[ClusterLabs Developers] corosync works, but when pacemaker is started both processes exit
shiguo ma
mashiguo279 at gmail.com
Wed Nov 2 02:35:41 UTC 2022
tail -f /var/log/pacemaker/pacemaker.log
Oct 28 13:46:55 node-2 pacemakerd [12941] (crm_log_init) info: Changed active directory to /var/lib/pacemaker/cores
Oct 28 13:46:55 node-2 pacemakerd [12941] (ipc_post_disconnect) info: Disconnected from launcher IPC API
Oct 28 13:46:55 node-2 pacemakerd [12941] (mcp_read_config) info: Could not connect to Corosync CMAP: CS_ERR_LIBRARY (retrying in 1s) | rc=2
Oct 28 13:46:56 node-2 pacemakerd [12941] (mcp_read_config) info: Could not connect to Corosync CMAP: CS_ERR_LIBRARY (retrying in 2s) | rc=2
Oct 28 13:46:58 node-2 pacemakerd [12941] (mcp_read_config) info: Could not connect to Corosync CMAP: CS_ERR_LIBRARY (retrying in 3s) | rc=2
Oct 28 13:47:01 node-2 pacemakerd [12941] (mcp_read_config) info: Could not connect to Corosync CMAP: CS_ERR_LIBRARY (retrying in 4s) | rc=2
Oct 28 13:47:05 node-2 pacemakerd [12941] (mcp_read_config) info: Could not connect to Corosync CMAP: CS_ERR_LIBRARY (retrying in 5s) | rc=2
Oct 28 13:47:10 node-2 pacemakerd [12941] (mcp_read_config) crit: Could not connect to Corosync CMAP: CS_ERR_LIBRARY | rc=2
Oct 28 13:47:10 node-2 pacemakerd [12941] (crm_exit) info: Exiting pacemakerd | with status 69
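
For reference, a few checks that should show whether corosync's CMAP IPC is reachable on the failing node before pacemakerd gives up (a sketch using the stock corosync 3.x command-line tools; adjust unit names to your packaging):

# Is corosync actually up when pacemaker starts?
systemctl status corosync pacemaker

# Talk to the CMAP IPC directly; a failure here, rather than in
# pacemakerd, would point at the corosync IPC layer itself.
corosync-cmapctl | head -n 20

# Link and quorum state as corosync sees them
corosync-cfgtool -s
corosync-quorumtool -s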
vim /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
totem {
    version: 2

    # Set name of the cluster
    cluster_name: ExampleCluster

    secauth: off

    # crypto_cipher and crypto_hash: Used for mutual node authentication.
    # If you choose to enable this, then do remember to create a shared
    # secret with "corosync-keygen".
    # Enabling crypto_cipher also requires enabling crypto_hash.
    # Crypto works only with the knet transport.
    crypto_cipher: none
    crypto_hash: none

    #transport: udpu
}
interface {
    # Ring number; if the host has multiple NICs, this keeps heartbeat
    # traffic on separate rings rather than mixing it.
    ringnumber: 0

    # Heartbeat network: corosync works out which IP address configured
    # on the local NICs belongs to this network, and uses that interface
    # for multicast heartbeat traffic.
    bindnetaddr: 60.60.60.0

    # Multicast address for heartbeat messages (must match on all nodes)
    mcastaddr: 226.94.1.1

    # Multicast port
    mcastport: 5405

    # Only send multicast packets with TTL 1, to prevent loops
    ttl: 1
}
logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off

    # Log to standard error. When in doubt, set to yes. Useful when
    # running in the foreground (when invoking "corosync -f")
    to_stderr: yes

    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log

    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: yes

    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off

    # Log messages with time stamps. When in doubt, set to hires (or on)
    #timestamp: hires

    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
quorum {
    # Enable and configure quorum subsystem (default: off)
    # see also corosync.conf.5 and votequorum.5
    provider: corosync_votequorum
}
nodelist {
    # Change/uncomment/add node sections to match cluster configuration
    node {
        # Hostname of the node
        name: node-1
        # Cluster membership node identifier
        nodeid: 1
        # Address of first link
        ring0_addr: node-1
        # When knet transport is used it's possible to define up to 8 links
        ring1_addr: 60.60.60.84
    }
    node {
        # Hostname of the node
        name: node-2
        # Cluster membership node identifier
        nodeid: 2
        # Address of first link
        ring0_addr: node-2
        # When knet transport is used it's possible to define up to 8 links
        ring1_addr: 60.60.60.119
    }
    # ...
    service {
        var: 0
        name: pacemaker
    }
}
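
Two asides on this config, for reference. First, votequorum also offers a two_node flag aimed at exactly this two-node topology (see votequorum.5); a minimal sketch of that quorum block:

quorum {
    provider: corosync_votequorum
    # Two-node mode: lets the cluster stay quorate when one node is
    # down; implies wait_for_all (see votequorum.5)
    two_node: 1
}

Second, the values corosync actually parsed can be cross-checked against this file at runtime through CMAP:

corosync-cmapctl | grep -E '^(totem|nodelist|quorum)\.'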
[Two screenshots (图片.png) were attached; they were scrubbed from the archive.]
The debug-mode logs are attached below.
-------------- next part (corosync debug log, node-1) --------------
Oct 31 09:26:00 [17028] node-1 corosync notice [MAIN ] main.c:1397 Corosync Cluster Engine 3.1.6 starting up
Oct 31 09:26:00 [17028] node-1 corosync info [MAIN ] main.c:1398 Corosync built-in features: pie relro bindnow
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemip.c:412 totemip_parse: IPv4 address of node-1 resolved as 60.60.60.84
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemip.c:412 totemip_parse: IPv4 address of node-1 resolved as 60.60.60.84
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemip.c:412 totemip_parse: IPv4 address of node-2 resolved as 60.60.60.119
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemconfig.c:1238 Configuring link 0 params
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemip.c:412 totemip_parse: IPv4 address of node-1 resolved as 60.60.60.84
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] main.c:1218 Moving main pid to cgroup v1 root cgroup
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:8388621; real_size:8392704; rb->word_size:2098176
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] main.c:1560 Corosync TTY detached
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totempg.c:286 waiting_trans_ack changed to 1
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:887 Token Timeout (3000 ms) retransmit timeout (714 ms)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:892 Token warning every 2250 ms (75% of Token Timeout)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:904 token hold (561 ms) retransmits before loss (4 retrans)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:911 join (50 ms) send_join (0 ms) consensus (3600 ms) merge (200 ms)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:914 downcheck (1000 ms) fail to recv const (2500 msgs)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:916 seqno unchanged const (30 rotations) Maximum network MTU 1410
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:920 window size per rotation (50 messages) maximum messages per rotation (17 messages)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:924 missed count const (5 messages)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:927 send threads (0 threads)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:930 heartbeat_failures_allowed (0)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:932 max_network_delay (50 ms)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:955 HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Oct 31 09:26:00 [17028] node-1 corosync notice [TOTEM ] totemnet.c:287 Initializing transport (UDP/IP Unicast).
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemudpu.c:887 Local receive multicast loop socket recv buffer size (4194304 bytes).
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemudpu.c:893 Local transmit multicast loop socket send buffer size (4194304 bytes).
Oct 31 09:26:00 [17028] node-1 corosync notice [TOTEM ] totemudpu.c:683 The network interface [60.60.60.84] is now up.
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:5136 Created or loaded sequence id 1.b2 for this ring.
Oct 31 09:26:00 [17028] node-1 corosync notice [SERV ] service.c:174 Service engine loaded: corosync configuration map access [0]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:802 Initializing IPC on cmap [0]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:748 No configured system.qb_ipc_type. Using native ipc
Oct 31 09:26:00 [17028] node-1 corosync info [QB ] ipc_setup.c:537 server name: cmap
Oct 31 09:26:00 [17028] node-1 corosync notice [SERV ] service.c:174 Service engine loaded: corosync configuration service [1]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:802 Initializing IPC on cfg [1]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:748 No configured system.qb_ipc_type. Using native ipc
Oct 31 09:26:00 [17028] node-1 corosync info [QB ] ipc_setup.c:537 server name: cfg
Oct 31 09:26:00 [17028] node-1 corosync notice [SERV ] service.c:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:802 Initializing IPC on cpg [2]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:748 No configured system.qb_ipc_type. Using native ipc
Oct 31 09:26:00 [17028] node-1 corosync info [QB ] ipc_setup.c:537 server name: cpg
Oct 31 09:26:00 [17028] node-1 corosync notice [SERV ] service.c:174 Service engine loaded: corosync profile loading service [4]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:788 NOT Initializing IPC on pload [4]
Oct 31 09:26:00 [17028] node-1 corosync notice [QUORUM] vsf_quorum.c:438 Using quorum provider corosync_votequorum
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1255 Reading configuration (runtime: 0)
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1517 ev_tracking=0, ev_tracking_barrier = 0: expected_votes = 0
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1123 total_votes=1, expected_votes=2
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:925 node 1 state=1, votes=1, expected=2
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:769 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 31 09:26:00 [17028] node-1 corosync notice [SERV ] service.c:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:802 Initializing IPC on votequorum [5]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:748 No configured system.qb_ipc_type. Using native ipc
Oct 31 09:26:00 [17028] node-1 corosync info [QB ] ipc_setup.c:537 server name: votequorum
Oct 31 09:26:00 [17028] node-1 corosync notice [SERV ] service.c:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:802 Initializing IPC on quorum [3]
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:748 No configured system.qb_ipc_type. Using native ipc
Oct 31 09:26:00 [17028] node-1 corosync info [QB ] ipc_setup.c:537 server name: quorum
Oct 31 09:26:00 [17028] node-1 corosync info [TOTEM ] totemconfig.c:1277 Configuring link 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemconfig.c:1214 adding dynamic member 60.60.60.84 for ring 0
Oct 31 09:26:00 [17028] node-1 corosync notice [TOTEM ] totemudpu.c:1327 adding new UDPU member {60.60.60.84}
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemconfig.c:1214 adding dynamic member 60.60.60.119 for ring 0
Oct 31 09:26:00 [17028] node-1 corosync notice [TOTEM ] totemudpu.c:1327 adding new UDPU member {60.60.60.119}
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2226 entering GATHER state from 15(interface change).
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:3289 Creating commit token because I am the rep.
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:1569 Saving state aru 0 high seq received 0
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] main.c:710 Storing new sequence id for ring b6
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2276 entering COMMIT state.
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4853 got commit token
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2313 entering RECOVERY state.
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2359 position [0] member 1:
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2362 previous ringid (1.b2)
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2368 aru 0 high delivered 0 received flag 1
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2469 Did not need to originate any messages in recovery.
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4853 got commit token
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4918 Sending initial ORF token
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4076 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4087 install seq 0 aru 0 high seq received 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4076 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4087 install seq 0 aru 0 high seq received 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4076 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4087 install seq 0 aru 0 high seq received 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4076 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4087 install seq 0 aru 0 high seq received 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:4106 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:1585 Resetting old ring state
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:1862 recovery to regular 1-0
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totempg.c:286 waiting_trans_ack changed to 1
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] main.c:365 Member joined: r(0) ip(60.60.60.84)
Oct 31 09:26:00 [17028] node-1 corosync debug [SYNC ] sync.c:489 call init for locally known services
Oct 31 09:26:00 [17028] node-1 corosync notice [QUORUM] vsf_quorum.c:160 Sync members[1]: 1
Oct 31 09:26:00 [17028] node-1 corosync notice [QUORUM] vsf_quorum.c:160 Sync joined[1]: 1
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:2132 entering OPERATIONAL state.
Oct 31 09:26:00 [17028] node-1 corosync notice [TOTEM ] totemsrp.c:2138 A new membership (1.b6) was formed. Members joined: 1
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:2086 got nodeinfo message from cluster node 1
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:2091 nodeinfo message[1]: votes: 1, expected: 2 flags: 8
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:769 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1123 total_votes=1, expected_votes=2
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:925 node 1 state=1, votes=1, expected=2
Oct 31 09:26:00 [17028] node-1 corosync debug [SYNC ] sync.c:310 enter sync process
Oct 31 09:26:00 [17028] node-1 corosync debug [SYNC ] sync.c:215 Committing synchronization for corosync configuration map access
Oct 31 09:26:00 [17028] node-1 corosync debug [CMAP ] cmap.c:455 Single node sync -> no action
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:1299 downlist left_list: 0 received
Oct 31 09:26:00 [17028] node-1 corosync debug [SYNC ] sync.c:215 Committing synchronization for corosync cluster closed process group service v1.01
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:841 my downlist: members(old:0 left:0)
Oct 31 09:26:00 [17028] node-1 corosync debug [SYNC ] sync.c:215 Committing synchronization for corosync cluster quorum service v0.1
Oct 31 09:26:00 [17028] node-1 corosync debug [QUORUM] vsf_quorum.c:650 sending nodelist notification to (nil), length = 72
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:769 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1853 Sending nodelist callback. ring_id = 1.b6
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:2086 got nodeinfo message from cluster node 1
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:2091 nodeinfo message[1]: votes: 1, expected: 2 flags: 8
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:769 flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1123 total_votes=1, expected_votes=2
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:925 node 1 state=1, votes=1, expected=2
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:2086 got nodeinfo message from cluster node 1
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:2091 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Oct 31 09:26:00 [17028] node-1 corosync debug [SYNC ] sync.c:215 Committing synchronization for corosync vote quorum service v1.0
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1123 total_votes=1, expected_votes=2
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:925 node 1 state=1, votes=1, expected=2
Oct 31 09:26:00 [17028] node-1 corosync notice [QUORUM] vsf_quorum.c:160 Members[1]: 1
Oct 31 09:26:00 [17028] node-1 corosync debug [QUORUM] vsf_quorum.c:569 sending quorum notification to (nil), length = 52/60
Oct 31 09:26:00 [17028] node-1 corosync debug [VOTEQ ] votequorum.c:1792 Sending quorum callback, quorate = 0
Oct 31 09:26:00 [17028] node-1 corosync notice [MAIN ] main.c:304 Completed service synchronization, ready to provide service.
Oct 31 09:26:00 [17028] node-1 corosync debug [TOTEM ] totempg.c:286 waiting_trans_ack changed to 0
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17033-19-RvJElh/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17033]
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:1553 lib_init_fn: conn=0x556b0e7cf840, cpd=0x556b0e7d02bc
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17033-20-mifdaq/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17033]
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:2130 cpg iteration initialize
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:2269 cpg iteration next
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:2321 cpg iteration finalize
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:1676 cpg finalize for conn=0x556b0e7cf840
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17033-19-RvJElh/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17033-19-RvJElh/qb) state:2
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:00 [17028] node-1 corosync debug [CPG ] cpg.c:1070 exit_fn for conn=0x556b0e7cf840
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17033-19-RvJElh/qb-response-cpg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17033-19-RvJElh/qb-event-cpg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17033-19-RvJElh/qb-request-cpg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17033-20-mifdaq/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17033-20-mifdaq/qb) state:2
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17033-20-mifdaq/qb-response-cfg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17033-20-mifdaq/qb-event-cfg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17033-20-mifdaq/qb-request-cfg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17035-19-TZ4b1y/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17035]
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17035-20-GpZUSH/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17035]
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:00 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d2e40
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17035-20-GpZUSH/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17035-20-GpZUSH/qb) state:2
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:00 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d2e40
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17035-20-GpZUSH/qb-response-cmap-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17035-20-GpZUSH/qb-event-cmap-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17035-20-GpZUSH/qb-request-cmap-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17035-19-TZ4b1y/qb)
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17035-19-TZ4b1y/qb) state:2
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:00 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17035-19-TZ4b1y/qb-response-cfg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17035-19-TZ4b1y/qb-event-cfg-header
Oct 31 09:26:00 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17035-19-TZ4b1y/qb-request-cfg-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17067-19-H2QSVX/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17067]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d1a80
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17067-20-eqFHZd/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17067]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d2e40
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17067-20-eqFHZd/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17067-20-eqFHZd/qb) state:2
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d2e40
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-20-eqFHZd/qb-response-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-20-eqFHZd/qb-event-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-20-eqFHZd/qb-request-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17067-19-H2QSVX/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17067-19-H2QSVX/qb) state:2
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d1a80
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-H2QSVX/qb-response-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-H2QSVX/qb-event-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-H2QSVX/qb-request-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17067-19-v8Ep5t/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17067]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17070-20-E6iyfK/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17070]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CPG ] cpg.c:1553 lib_init_fn: conn=0x556b0e7d2e20, cpd=0x556b0e7ce15c
Oct 31 09:26:02 [17028] node-1 corosync debug [CPG ] cpg.c:1312 got procjoin message from cluster node 1 (r(0) ip(60.60.60.84) ) for pid 17070
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17070-21-F1u0q0/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17070]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17070-21-F1u0q0/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17070-21-F1u0q0/qb) state:2
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-F1u0q0/qb-response-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-F1u0q0/qb-event-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-F1u0q0/qb-request-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17070-21-Al5jEg/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17070]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17070-21-Al5jEg/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17070-21-Al5jEg/qb) state:2
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-Al5jEg/qb-response-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-Al5jEg/qb-event-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-Al5jEg/qb-request-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17070-21-B1A1Sw/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17070]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17070-21-B1A1Sw/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17070-21-B1A1Sw/qb) state:2
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-B1A1Sw/qb-response-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-B1A1Sw/qb-event-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-B1A1Sw/qb-request-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_setup.c:696 IPC credentials authenticated (/dev/shm/qb-17029-17070-21-cP0m9M/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipc_shm.c:286 connecting to client [17070]
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer.c:238 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:269 connection created
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:373 lib_init_fn: conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17070-21-cP0m9M/qb)
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17070-21-cP0m9M/qb) state:2
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:02 [17028] node-1 corosync debug [CMAP ] cmap.c:393 exit_fn for conn=0x556b0e7d71a0
Oct 31 09:26:02 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-cP0m9M/qb-response-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-cP0m9M/qb-event-cmap-header
Oct 31 09:26:02 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-21-cP0m9M/qb-request-cmap-header
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:760 HUP conn (/dev/shm/qb-17029-17070-20-E6iyfK/qb)
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17070-20-E6iyfK/qb) state:2
Oct 31 09:26:12 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:12 [17028] node-1 corosync debug [CPG ] cpg.c:1070 exit_fn for conn=0x556b0e7d2e20
Oct 31 09:26:12 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-20-E6iyfK/qb-response-cpg-header
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-20-E6iyfK/qb-event-cpg-header
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17070-20-E6iyfK/qb-request-cpg-header
Oct 31 09:26:12 [17028] node-1 corosync debug [CPG ] cpg.c:1328 got procleave message from cluster node 1 (r(0) ip(60.60.60.84) ) for pid 17070
Oct 31 09:26:12 [17028] node-1 corosync notice [CFG ] cfg.c:580 Node 1 was shut down by sysadmin
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:373 Unloading all Corosync service engines.
Oct 31 09:26:12 [17028] node-1 corosync info [QB ] ipc_setup.c:593 withdrawing server sockets
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:230 qb_ipcs_unref() - destroying
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:240 Service engine unloaded: corosync vote quorum service v1.0
Oct 31 09:26:12 [17028] node-1 corosync info [QB ] ipc_setup.c:593 withdrawing server sockets
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:230 qb_ipcs_unref() - destroying
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:240 Service engine unloaded: corosync configuration map access
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:606 qb_ipcs_disconnect(/dev/shm/qb-17029-17067-19-v8Ep5t/qb) state:2
Oct 31 09:26:12 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:346 cs_ipcs_connection_closed()
Oct 31 09:26:12 [17028] node-1 corosync debug [MAIN ] ipc_glue.c:325 cs_ipcs_connection_destroyed()
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-v8Ep5t/qb-response-cfg-header
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-v8Ep5t/qb-event-cfg-header
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] unix.c:292 Unlinking file at dir: qb-event-cfg-data: No such file or directory (2)
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] unix.c:292 Unlinking file at dir: qb-event-cfg-header: No such file or directory (2)
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ringbuffer_helper.c:337 Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-v8Ep5t/qb-request-cfg-header
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] unix.c:292 Unlinking file at dir: qb-request-cfg-data: No such file or directory (2)
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] unix.c:292 Unlinking file at dir: qb-request-cfg-header: No such file or directory (2)
Oct 31 09:26:12 [17028] node-1 corosync info [QB ] ipc_setup.c:593 withdrawing server sockets
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:230 qb_ipcs_unref() - destroying
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:240 Service engine unloaded: corosync configuration service
Oct 31 09:26:12 [17028] node-1 corosync info [QB ] ipc_setup.c:593 withdrawing server sockets
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:230 qb_ipcs_unref() - destroying
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:240 Service engine unloaded: corosync cluster closed process group service v1.01
Oct 31 09:26:12 [17028] node-1 corosync info [QB ] ipc_setup.c:593 withdrawing server sockets
Oct 31 09:26:12 [17028] node-1 corosync debug [QB ] ipcs.c:230 qb_ipcs_unref() - destroying
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:240 Service engine unloaded: corosync cluster quorum service v0.1
Oct 31 09:26:12 [17028] node-1 corosync notice [SERV ] service.c:240 Service engine unloaded: corosync profile loading service
Oct 31 09:26:12 [17028] node-1 corosync debug [TOTEM ] totemsrp.c:3399 sending join/leave message
Oct 31 09:26:12 [17028] node-1 corosync notice [MAIN ] util.c:133 Corosync Cluster Engine exiting normally
-------------- next part (pacemaker debug log, node-1) --------------
Oct 31 09:26:02 node-1 pacemakerd [17067] (crm_log_init) info: Changed active directory to /var/lib/pacemaker/cores
Oct 31 09:26:02 node-1 pacemakerd [17067] (ipc_post_disconnect) info: Disconnected from launcher IPC API
Oct 31 09:26:02 node-1 pacemakerd [17067] (get_cluster_type) info: Detected an active 'corosync' cluster
Oct 31 09:26:02 node-1 pacemakerd [17067] (mcp_read_config) info: Reading configuration for corosync stack
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_ipcc_disconnect) debug: qb_ipcc_disconnect()
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17067-19-H2QSVX/qb-request-cmap-header
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17067-19-H2QSVX/qb-response-cmap-header
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17067-19-H2QSVX/qb-event-cmap-header
Oct 31 09:26:02 node-1 pacemakerd [17067] (main) notice: Starting Pacemaker 2.1.4 | build=748a066f4f features:agent-manpages corosync-ge-2 monotonic nagios
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_ipcs_us_publish) info: server name: pacemakerd
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemakerd [17067] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemakerd [17067] (cluster_connect_cfg) debug: Corosync reports local node ID is 1
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to cib_ro IPC: Connection refused
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to stonith-ng IPC: Connection refused
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to lrmd IPC: Connection refused
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to attrd IPC: Connection refused
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to pengine IPC: Connection refused
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to crmd IPC: Connection refused
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Using uid=1000 and group=1000 for process pacemaker-based
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Forked child 17069 for process pacemaker-based
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Forked child 17070 for process pacemaker-fenced
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Forked child 17071 for process pacemaker-execd
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Using uid=1000 and group=1000 for process pacemaker-attrd
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Forked child 17072 for process pacemaker-attrd
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Using uid=1000 and group=1000 for process pacemaker-schedulerd
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Forked child 17073 for process pacemaker-schedulerd
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Using uid=1000 and group=1000 for process pacemaker-controld
Oct 31 09:26:02 node-1 pacemakerd [17067] (start_child) info: Forked child 17074 for process pacemaker-controld
Oct 31 09:26:02 node-1 pacemakerd [17067] (main) notice: Pacemaker daemon successfully started and accepting connections
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_child_exit) warning: Shutting cluster down because pacemaker-based[17069] had fatal failure
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) notice: Shutting down Pacemaker
Oct 31 09:26:02 node-1 pacemakerd [17067] (stop_child) notice: Stopping pacemaker-controld | sent signal 15 to process 17074
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_child_exit) warning: Shutting cluster down because pacemaker-schedulerd[17073] had fatal failure
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) notice: Shutting down Pacemaker
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_child_exit) warning: Shutting cluster down because pacemaker-attrd[17072] had fatal failure
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) notice: Shutting down Pacemaker
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_log_init) info: Changed active directory to /var/lib/pacemaker/cores
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (main) notice: Starting Pacemaker fencer
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_ipc_connect) debug: Could not establish stonith-ng IPC connection: Connection refused (111)
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (get_cluster_type) info: Verifying cluster type: 'corosync'
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (get_cluster_type) info: Assuming an active 'corosync' cluster
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_cluster_connect) notice: Connecting to corosync cluster infrastructure
Oct 31 09:26:02 node-1 pacemaker-execd [17071] (crm_log_init) info: Changed active directory to /var/lib/pacemaker/cores
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_child_exit) warning: Shutting cluster down because pacemaker-controld[17074] had fatal failure
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) notice: Shutting down Pacemaker
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) debug: pacemaker-controld confirmed stopped
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) debug: pacemaker-schedulerd confirmed stopped
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) debug: pacemaker-attrd confirmed stopped
Oct 31 09:26:02 node-1 pacemakerd [17067] (stop_child) notice: Stopping pacemaker-execd | sent signal 15 to process 17071
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (get_local_nodeid) debug: Local nodeid is 1
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_child_exit) error: pacemaker-execd[17071] terminated with signal 15 (Terminated)
Oct 31 09:26:02 node-1 pacemakerd [17067] (pcmk_shutdown_worker) debug: pacemaker-execd confirmed stopped
Oct 31 09:26:02 node-1 pacemakerd [17067] (stop_child) notice: Stopping pacemaker-fenced | sent signal 15 to process 17070
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_ipcc_disconnect) debug: qb_ipcc_disconnect()
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-F1u0q0/qb-request-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-F1u0q0/qb-response-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-F1u0q0/qb-event-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_get_peer) info: Created entry 94abc6a3-6fe9-4136-9a24-d4afe842207d/0x5578c5a1f5c0 for node node-1/1 (1 total)
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_get_peer) info: Node 1 is now known as node-1
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (st_peer_update_callback) debug: Broadcasting our uname because of node 1
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_ipcc_disconnect) debug: qb_ipcc_disconnect()
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-Al5jEg/qb-request-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-Al5jEg/qb-response-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-Al5jEg/qb-event-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (pcmk__corosync_has_nodelist) debug: Corosync has node list
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_ipcc_disconnect) debug: qb_ipcc_disconnect()
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-B1A1Sw/qb-request-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-B1A1Sw/qb-response-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-B1A1Sw/qb-event-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_get_peer) info: Node 1 has uuid 1
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_update_peer_proc) info: cluster_connect_cpg: Node node-1[1] - corosync-cpg is now online
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (update_peer_state_iter) notice: Node node-1 state is now member | nodeid=1 previous=unknown source=crm_update_peer_proc
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (st_peer_update_callback) debug: Broadcasting our uname because of node 1
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (pcmk__corosync_connect) info: Connection to corosync established
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_open_2) debug: shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_ipcc_disconnect) debug: qb_ipcc_disconnect()
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-cP0m9M/qb-request-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-cP0m9M/qb-response-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (qb_rb_close_helper) debug: Closing ringbuffer: /dev/shm/qb-17029-17070-21-cP0m9M/qb-event-cmap-header
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (crm_ipc_connect) debug: Could not establish cib_rw IPC connection: Connection refused (111)
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (pcmk__add_mainloop_ipc) debug: Connection to cib_rw failed: 111
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Could not connect to CIB manager for stonithd
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Connection to CIB manager for stonithd failed: Transport endpoint is not connected
Oct 31 09:26:02 node-1 pacemaker-fenced [17070] (cib_native_signoff) debug: Disconnecting from the CIB manager
Oct 31 09:26:03 node-1 pacemaker-fenced [17070] (crm_ipc_connect) debug: Could not establish cib_rw IPC connection: Connection refused (111)
Oct 31 09:26:03 node-1 pacemaker-fenced [17070] (pcmk__add_mainloop_ipc) debug: Connection to cib_rw failed: 111
Oct 31 09:26:03 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Could not connect to CIB manager for stonithd
Oct 31 09:26:03 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Connection to CIB manager for stonithd failed: Transport endpoint is not connected
Oct 31 09:26:03 node-1 pacemaker-fenced [17070] (cib_native_signoff) debug: Disconnecting from the CIB manager
Oct 31 09:26:04 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to cib_ro IPC: Connection refused
Oct 31 09:26:05 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to stonith-ng IPC: Connection refused
Oct 31 09:26:05 node-1 pacemakerd [17067] (check_next_subdaemon) notice: pacemaker-fenced[17070] is unresponsive to ipc after 1 tries
Oct 31 09:26:05 node-1 pacemaker-fenced [17070] (crm_ipc_connect) debug: Could not establish cib_rw IPC connection: Connection refused (111)
Oct 31 09:26:05 node-1 pacemaker-fenced [17070] (pcmk__add_mainloop_ipc) debug: Connection to cib_rw failed: 111
Oct 31 09:26:05 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Could not connect to CIB manager for stonithd
Oct 31 09:26:05 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Connection to CIB manager for stonithd failed: Transport endpoint is not connected
Oct 31 09:26:05 node-1 pacemaker-fenced [17070] (cib_native_signoff) debug: Disconnecting from the CIB manager
Oct 31 09:26:06 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to lrmd IPC: Connection refused
Oct 31 09:26:07 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to attrd IPC: Connection refused
Oct 31 09:26:08 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to pengine IPC: Connection refused
Oct 31 09:26:08 node-1 pacemaker-fenced [17070] (crm_ipc_connect) debug: Could not establish cib_rw IPC connection: Connection refused (111)
Oct 31 09:26:08 node-1 pacemaker-fenced [17070] (pcmk__add_mainloop_ipc) debug: Connection to cib_rw failed: 111
Oct 31 09:26:08 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Could not connect to CIB manager for stonithd
Oct 31 09:26:08 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Connection to CIB manager for stonithd failed: Transport endpoint is not connected
Oct 31 09:26:08 node-1 pacemaker-fenced [17070] (cib_native_signoff) debug: Disconnecting from the CIB manager
Oct 31 09:26:09 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to crmd IPC: Connection refused
Oct 31 09:26:10 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to cib_ro IPC: Connection refused
Oct 31 09:26:11 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to stonith-ng IPC: Connection refused
Oct 31 09:26:11 node-1 pacemakerd [17067] (check_next_subdaemon) notice: pacemaker-fenced[17070] is unresponsive to ipc after 2 tries
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmk__ipc_is_authentic_process_active) info: Could not connect to lrmd IPC: Connection refused
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (crm_ipc_connect) debug: Could not establish cib_rw IPC connection: Connection refused (111)
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (pcmk__add_mainloop_ipc) debug: Connection to cib_rw failed: 111
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Could not connect to CIB manager for stonithd
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (cib_native_signon_raw) info: Connection to CIB manager for stonithd failed: Transport endpoint is not connected
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (cib_native_signoff) debug: Disconnecting from the CIB manager
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (setup_cib) error: Could not connect to the CIB manager: Transport endpoint is not connected (-107)
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (qb_ipcs_us_publish) info: server name: stonith-ng
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (main) notice: Pacemaker fencer successfully started and accepting connections
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (crm_signal_dispatch) notice: Caught 'Terminated' signal | 15 (invoking handler)
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (stonith_shutdown) info: Terminating with 0 clients
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (cib_client_del_notify_callback) debug: The callback of the event does not exist(cib_diff_notify)
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (cib_native_signoff) debug: Disconnecting from the CIB manager
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (qb_ipcs_us_withdraw) info: withdrawing server sockets
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (qb_ipcs_unref) debug: qb_ipcs_unref() - destroying
Oct 31 09:26:12 node-1 pacemaker-fenced [17070] (crm_exit) info: Exiting pacemaker-fenced | with status 0
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmk_child_exit) info: pacemaker-fenced[17070] exited with status 0 (OK)
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmk_shutdown_worker) debug: pacemaker-fenced confirmed stopped
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmk_shutdown_worker) debug: pacemaker-based confirmed stopped
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmk_shutdown_worker) notice: Shutdown complete
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmk_shutdown_worker) notice: Shutting down and staying down after fatal error
Oct 31 09:26:12 node-1 pacemakerd [17067] (pcmkd_shutdown_corosync) info: Asking Corosync to shut down
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_ipcc_disconnect) debug: qb_ipcc_disconnect()
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_ipc_us_ready) debug: poll(fd 9) got POLLHUP
Oct 31 09:26:12 node-1 pacemakerd [17067] (_check_connection_state_with) debug: interpreting result -107 (from socket) as a disconnect: Transport endpoint is not connected (107)
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_rb_close_helper) debug: Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-v8Ep5t/qb-request-cfg-header
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_rb_close_helper) debug: Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-v8Ep5t/qb-response-cfg-header
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_sys_unlink_or_truncate_at) debug: Unlinking file at dir: qb-response-cfg-data: No such file or directory (2)
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_sys_unlink_or_truncate_at) debug: Unlinking file at dir: qb-response-cfg-header: No such file or directory (2)
Oct 31 09:26:12 node-1 pacemakerd [17067] (qb_rb_close_helper) debug: Free'ing ringbuffer: /dev/shm/qb-17029-17067-19-v8Ep5t/qb-event-cfg-header
Oct 31 09:26:12 node-1 pacemakerd [17067] (crm_exit) info: Exiting pacemakerd | with status 100
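pacemakerd shutting down and "staying down after fatal error" (exit status 100), together with the configure error reproduced just below, suggests that either corosync never came up on this node or this pacemakerd build has no corosync support. A quick sanity check, assuming standard packaging (all of these are stock corosync/pacemaker tools; the grep pattern is only an example):

systemctl status corosync
corosync-cfgtool -s                          # ring/link status
corosync-cmapctl | grep totem.cluster_name   # CMAP is the API pacemakerd connects to
pacemakerd --features                        # the feature list should mention corosync

If a source build stops in configure with "At least one cluster stack must be supported", the corosync development headers are missing; the package name varies by distro (e.g. corosynclib-devel on EL-like systems).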
configure: error: At least one cluster stack must be supported
-------------- next part --------------
totem {
version: 2
# Set name of the cluster
cluster_name: ExampleCluster
secauth: off
# crypto_cipher and crypto_hash: Used for mutual node authentication.
# If you choose to enable this, then do remember to create a shared
# secret with "corosync-keygen".
# enabling crypto_cipher, requires also enabling of crypto_hash.
# crypto works only with knet transport
crypto_cipher: none
crypto_hash: none
transport: udpu
# interface is a sub-section of totem and must be nested inside it
interface {
ringnumber: 0 # Ring number; on hosts with multiple NICs this keeps heartbeat rings apart
bindnetaddr: 60.60.60.0 # Heartbeat network; corosync works out which local IP belongs to this network and uses that interface for multicast heartbeats
mcastaddr: 226.94.1.1 # Multicast address for heartbeat traffic (must match on all nodes)
mcastport: 5405 # Multicast port
ttl: 1 # Only multicast packets with TTL 1, to prevent routing loops
}
}
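If I read corosync.conf(5) correctly, with transport: udpu the peer addresses come from the nodelist, mcastaddr and ttl are multicast-only settings that are ignored, while the UDP port is still taken from mcastport; so the interface block can likely shrink to something like this (a sketch, not tested here):

interface {
ringnumber: 0
bindnetaddr: 60.60.60.0
mcastport: 5405   # udpu still reads its UDP port from mcastport
}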
logging {
# Log the source file and line where messages are being
# generated. When in doubt, leave off. Potentially useful for
# debugging.
fileline: on
# Log to standard error. When in doubt, set to yes. Useful when
# running in the foreground (when invoking "corosync -f")
to_stderr: yes
# Log to a log file. When set to "no", the "logfile" option
# must not be set.
to_logfile: yes
logfile: /var/log/cluster/corosync.log
# Log to the system log daemon. When in doubt, set to yes.
to_syslog: yes
# Log debug messages (very verbose). When in doubt, leave off.
debug: on
# Log messages with time stamps. When in doubt, set to hires (or on)
#timestamp: hires
logger_subsys {
subsys: QUORUM
debug: on
}
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
}
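One more point for a two-node cluster like this one: votequorum normally needs two_node as well, otherwise the surviving node loses quorum as soon as its peer goes away. A minimal sketch per votequorum(5):

quorum {
provider: corosync_votequorum
two_node: 1   # also enables wait_for_all by default; see votequorum(5)
}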
nodelist {
# Change/uncomment/add node sections to match cluster configuration
node {
# Hostname of the node
name: node-1
# Cluster membership node identifier
nodeid: 1
# Address of first link
ring0_addr: node-1
# When knet transport is used it's possible to define up to 8 links
#ring1_addr: 60.60.60.84
}
node {
# Hostname of the node
name: node-2
# Cluster membership node identifier
nodeid: 2
# Address of first link
ring0_addr: node-2
# When knet transport is used it's possible to define up to 8 links
#ring1_addr: 60.60.60.119
}
# ...
}
service {
# the key is "ver", not "var"
ver: 0
name: pacemaker
}
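As an aside, the service stanza is a corosync 1.x plugin-era mechanism; corosync 2 and later never start pacemaker that way, so the stanza can simply be dropped and both daemons started through their own units (assuming systemd packaging):

systemctl enable --now corosync
systemctl enable --now pacemaker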