[Pacemaker] Peers see each other, but never successfully elect a DC.

D. J. Draper draperd7772 at hotmail.com
Fri Jan 29 00:12:33 EST 2010


Hi guys and gals. First time posting here, and I've either got a really simple issue or a whopper of a problem, as extensive Googling failed to turn up anyone else hitting this.

Everything I've read about running Pacemaker on the OpenAIS stack pretty much states you just write a valid pair of corosync.conf files (the openais parser is apparently broken right now), update /etc/init.d/openais to use the experimental corosync parser, fire up the service, and two nodes should form a cluster.

Well, I do the above, successfully fire up the openais service on both nodes, then tail /var/log/messages. Both nodes report successfully connecting to the CIB, log seeing each other, and even send each other join invitations. But neither ever acknowledges its counterpart's invitation, and no DC gets elected:

-bash-4.0# crm status
============
Last updated: Thu Jan 28 22:48:18 2010
Stack: openais
Current DC: NONE
0 Nodes configured, unknown expected votes
0 Resources configured.
============
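Since crm status shows nothing at all, here's the quick sketch I've been using to pull the membership picture out of the debug log (field positions assume the log format pasted further down; `list_members` is just a name I picked):

```shell
# List the unique members that pcmk reports in its stable-membership
# updates, to see whether both nodes ever land in the same ring.
# Assumes lines like:
#   ... [pcmk  ] info: pcmk_peer_update: MEMB: node01.houseofdraper.org 188983488
list_members() {
    grep 'pcmk_peer_update: MEMB:' "$1" | awk '{print $(NF-1), $NF}' | sort -u
}
```

Run as `list_members /tmp/corosync.log` — on node01 it shows both node01 and the `.pending.` entry for 205760704, so the totem layer clearly sees both peers even though the CRM layer never elects.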

Specifics:
Hardware: Marvell Kirkwood - OpenRD Client
Distro: Fedora 11 arm
Kernel: Linux 2.6.33

I guess the next order of business is to state what versions of code I'm running, and show how I compiled them...

----------------------------------------
-bash-4.0# cat ~/compileclustering.sh 
mkdir /mnt/taxi
mount /dev/mmcblk0p1 /mnt/taxi
cp /mnt/taxi/ig/conf/local.repo /etc/yum.repos.d/

yum -y install tar wget make rpm-build yum-utils perl-TimeDate createrepo

mkdir /usr/src/clustering
cd /usr/src/clustering

wget http://clusterlabs.org/rpm/fedora-11/src/cluster-glue-1.0.1-1.fc11.src.rpm
yum-builddep -y cluster-glue-1.0.1-1.fc11.src.rpm
rpmbuild --rebuild cluster-glue-1.0.1-1.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all
wget http://clusterlabs.org/rpm/fedora-11/src/resource-agents-1.0.1-1.fc11.src.rpm
yum-builddep -y resource-agents-1.0.1-1.fc11.src.rpm
rpmbuild --rebuild resource-agents-1.0.1-1.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all
wget http://clusterlabs.org/rpm/fedora-11/src/corosync-1.2.0-1.fc11.src.rpm
yum-builddep -y corosync-1.2.0-1.fc11.src.rpm
rpmbuild --rebuild corosync-1.2.0-1.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all
wget http://clusterlabs.org/rpm/fedora-11/src/openais-1.1.0-1.fc11.src.rpm
yum-builddep -y openais-1.1.0-1.fc11.src.rpm
rpmbuild --rebuild openais-1.1.0-1.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all
wget http://clusterlabs.org/rpm/fedora-11/src/heartbeat-3.0.1-1.fc11.src.rpm
yum-builddep -y heartbeat-3.0.1-1.fc11.src.rpm
rpmbuild --rebuild heartbeat-3.0.1-1.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all
wget http://clusterlabs.org/rpm/fedora-11/src/pacemaker-1.0.7-2.fc11.src.rpm
yum-builddep -y pacemaker-1.0.7-2.fc11.src.rpm 
rpmbuild --rebuild pacemaker-1.0.7-2.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all
wget http://clusterlabs.org/rpm/fedora-11/src/drbd-8.3.6-1.fc11.src.rpm
yum-builddep -y drbd-8.3.6-1.fc11.src.rpm
rpmbuild --rebuild drbd-8.3.6-1.fc11.src.rpm
createrepo /root/rpmbuild/RPMS/armv5tel
yum clean all

yum -y install pacemaker openais drbd-utils drbd-udev drbd-heartbeat drbd-bash
----------------------------------------
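For what it's worth, the repeated fetch/builddep/rebuild/createrepo cycle could be collapsed into a loop (a sketch with the same SRPM list, URLs, and repo path as the script above; `build_all` is just a name I picked):

```shell
# All seven SRPMs go through the identical build steps, so loop over them.
srpms="cluster-glue-1.0.1-1.fc11 resource-agents-1.0.1-1.fc11
corosync-1.2.0-1.fc11 openais-1.1.0-1.fc11 heartbeat-3.0.1-1.fc11
pacemaker-1.0.7-2.fc11 drbd-8.3.6-1.fc11"

build_all() {
    base=http://clusterlabs.org/rpm/fedora-11/src
    for s in $srpms; do
        wget "$base/$s.src.rpm"          # fetch the source RPM
        yum-builddep -y "$s.src.rpm"     # pull in build dependencies
        rpmbuild --rebuild "$s.src.rpm"  # build for armv5tel
        createrepo /root/rpmbuild/RPMS/armv5tel  # refresh the local repo
        yum clean all                    # so yum sees the new packages
    done
}
```

Same ordering as the original, which matters here since later packages build-depend on earlier ones via the local repo.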

Next, the /etc/corosync/corosync.conf file:

----------------------------------------
-bash-4.0# cat /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
aisexec {
    group:            root
    user:             root
}
amf {
    mode:             disabled
}
compatibility: whitetank
logging {
    debug:            on
    logfile:          /tmp/corosync.log
    logger_subsys {
        subsys:       AMF
        debug:        off
    }
    syslog_facility:  daemon
    timestamp:        on
    to_logfile:       yes
    to_stderr:        yes
    to_syslog:        yes
}
service {
    name:             pacemaker
    use_mgmtd:        yes
    ver:              0
}
totem {
    autojoin:         yes
    clear_node_high_bit: yes
    consensus:        1500
    hold:             180
    interface {
        ringnumber:   0
        bindnetaddr:  192.168.67.1
        mcastaddr:    226.94.1.1
        mcastport:    5405
    }
    interface {
        ringnumber:   1
        bindnetaddr:  192.168.2.1
        mcastaddr:    226.94.2.1
        mcastport:    5405
    }
    join:             60
    max_messages:     20
    rrp_mode:         passive
    secauth:          off
    token:            1000
    token_retransmits_before_loss_const: 20
    threads:          0
    version:          2
    vsftype:          ykd
}
----------------------------------------
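One thing I did sanity-check is the timer pair, since corosync.conf(5) says consensus should be at least 1.2x token (its default is 1.2 * token). A trivial check with the values above:

```shell
# corosync.conf(5) rule of thumb: consensus >= 1.2 * token.
token=1000        # ms, from the totem section above
consensus=1500    # ms, from the totem section above
min_consensus=$(( token * 12 / 10 ))
if [ "$consensus" -ge "$min_consensus" ]; then
    echo "timing ok: consensus $consensus >= $min_consensus"
else
    echo "timing BAD: consensus $consensus < $min_consensus"
fi
```

So 1500 >= 1200 and the timers at least look sane; that's not the problem.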

Now, node01's /tmp/corosync.log (sorry, this is huge):

----------------------------------------
-bash-4.0# cat /tmp/corosync.log
Jan 28 22:58:16 corosync [MAIN  ] Corosync Cluster Engine ('1.2.0'): started and ready to provide service.
Jan 28 22:58:16 corosync [MAIN  ] Corosync built-in features: nss rdma
Jan 28 22:58:16 corosync [MAIN  ] Successfully configured openais services to load
Jan 28 22:58:16 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Jan 28 22:58:16 corosync [TOTEM ] Token Timeout (1000 ms) retransmit timeout (49 ms)
Jan 28 22:58:16 corosync [TOTEM ] token hold (180 ms) retransmits before loss (20 retrans)
Jan 28 22:58:16 corosync [TOTEM ] join (60 ms) send_join (0 ms) consensus (1500 ms) merge (200 ms)
Jan 28 22:58:16 corosync [TOTEM ] downcheck (1000 ms) fail to recv const (50 msgs)
Jan 28 22:58:16 corosync [TOTEM ] seqno unchanged const (30 rotations) Maximum network MTU 1402
Jan 28 22:58:16 corosync [TOTEM ] window size per rotation (50 messages) maximum messages per rotation (17 messages)
Jan 28 22:58:16 corosync [TOTEM ] send threads (0 threads)
Jan 28 22:58:16 corosync [TOTEM ] RRP token expired timeout (49 ms)
Jan 28 22:58:16 corosync [TOTEM ] RRP token problem counter (2000 ms)
Jan 28 22:58:16 corosync [TOTEM ] RRP threshold (10 problem count)
Jan 28 22:58:16 corosync [TOTEM ] RRP mode set to passive.
Jan 28 22:58:16 corosync [TOTEM ] heartbeat_failures_allowed (0)
Jan 28 22:58:16 corosync [TOTEM ] max_network_delay (50 ms)
Jan 28 22:58:16 corosync [TOTEM ] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Jan 28 22:58:16 corosync [TOTEM ] Initializing transport (UDP/IP).
Jan 28 22:58:16 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 28 22:58:16 corosync [TOTEM ] Initializing transport (UDP/IP).
Jan 28 22:58:16 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 28 22:58:16 corosync [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Jan 28 22:58:16 corosync [IPC   ] you are using ipc api v2
Jan 28 22:58:16 corosync [TOTEM ] Receive multicast socket recv buffer size (217088 bytes).
Jan 28 22:58:16 corosync [TOTEM ] Transmit multicast socket send buffer size (217088 bytes).
Jan 28 22:58:16 corosync [TOTEM ] The network interface [192.168.67.11] is now up.
Jan 28 22:58:16 corosync [TOTEM ] Created or loaded sequence id 24.192.168.67.11 for this ring.
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais cluster membership service B.01.01
Jan 28 22:58:16 corosync [EVT   ] Evt exec init request
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais event service B.01.01
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais availability management framework B.01.01
Jan 28 22:58:16 corosync [MSG   ] [DEBUG]: msg_exec_init_fn
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais message service B.03.01
Jan 28 22:58:16 corosync [LCK   ] [DEBUG]: lck_exec_init_fn
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais distributed locking service B.03.01
Jan 28 22:58:16 corosync [SERV  ] Service engine loaded: openais timer service A.01.01
Jan 28 22:58:16 corosync [pcmk  ] info: process_ais_conf: Reading configure
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_init: Local handle: 7685269064754659331 for logging
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional logging options...
Jan 28 22:58:16 corosync [pcmk  ] info: get_config_opt: Found 'on' for option: debug
Jan 28 22:58:16 corosync [pcmk  ] info: get_config_opt: Defaulting to 'off' for option: to_file
Jan 28 22:58:16 corosync [pcmk  ] info: get_config_opt: Found 'daemon' for option: syslog_facility
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_init: Local handle: 8535092201842016260 for service
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: config_find_next: Processing additional service options...
Jan 28 22:58:16 corosync [pcmk  ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
Jan 28 22:58:17 corosync [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_logd
Jan 28 22:58:17 corosync [pcmk  ] info: get_config_opt: Found 'yes' for option: use_mgmtd
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jan 28 22:58:17 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 4294967295
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_startup: Local hostname: node01.houseofdraper.org
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_update_nodeid: Local node id: 188983488
Jan 28 22:58:17 corosync [pcmk  ] info: update_member: Creating entry for node 188983488 born on 0
Jan 28 22:58:17 corosync [pcmk  ] info: update_member: 0x86cf0 Node 188983488 now known as node01.houseofdraper.org (was: (null))
Jan 28 22:58:17 corosync [pcmk  ] info: update_member: Node node01.houseofdraper.org now has 1 quorum votes (was 0)
Jan 28 22:58:17 corosync [pcmk  ] info: update_member: Node 188983488/node01.houseofdraper.org is now: member
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19331 for process stonithd
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19332 for process cib
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19333 for process lrmd
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19334 for process attrd
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19335 for process pengine
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19336 for process crmd
Jan 28 22:58:17 corosync [pcmk  ] info: spawn_child: Forked child 19337 for process mgmtd
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: Pacemaker Cluster Manager 1.0.7
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: corosync configuration service
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: corosync profile loading service
Jan 28 22:58:17 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Jan 28 22:58:17 corosync [TOTEM ] Receive multicast socket recv buffer size (217088 bytes).
Jan 28 22:58:17 corosync [TOTEM ] Transmit multicast socket send buffer size (217088 bytes).
Jan 28 22:58:17 corosync [TOTEM ] The network interface [192.168.2.11] is now up.
Jan 28 22:58:17 corosync [TOTEM ] entering GATHER state from 15.
Jan 28 22:58:17 corosync [TOTEM ] Creating commit token because I am the rep.
Jan 28 22:58:17 corosync [TOTEM ] Saving state aru 0 high seq received 0
Jan 28 22:58:17 corosync [TOTEM ] Storing new sequence id for ring 1c
Jan 28 22:58:17 corosync [TOTEM ] entering COMMIT state.
Jan 28 22:58:17 corosync [TOTEM ] got commit token
Jan 28 22:58:17 corosync [TOTEM ] entering RECOVERY state.
Jan 28 22:58:17 corosync [TOTEM ] position [0] member 192.168.67.11:
Jan 28 22:58:17 corosync [TOTEM ] previous ring seq 24 rep 192.168.67.11
Jan 28 22:58:17 corosync [TOTEM ] aru 0 high delivered 0 received flag 1
Jan 28 22:58:17 corosync [TOTEM ] Did not need to originate any messages in recovery.
Jan 28 22:58:17 corosync [TOTEM ] got commit token
Jan 28 22:58:17 corosync [TOTEM ] Sending initial ORF token
Jan 28 22:58:17 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Jan 28 22:58:17 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:17 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Jan 28 22:58:17 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:17 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Jan 28 22:58:17 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:17 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Jan 28 22:58:17 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:17 corosync [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
Jan 28 22:58:17 corosync [TOTEM ] recovery to regular 1-0
Jan 28 22:58:17 corosync [TOTEM ] Delivering to app 1 to 0
Jan 28 22:58:17 corosync [CLM   ] CLM CONFIGURATION CHANGE
Jan 28 22:58:17 corosync [CLM   ] New Configuration:
Jan 28 22:58:17 corosync [CLM   ] Members Left:
Jan 28 22:58:17 corosync [CLM   ] Members Joined:
Jan 28 22:58:17 corosync [EVT   ] Evt conf change 1
Jan 28 22:58:17 corosync [EVT   ] m 0, j 0 l 0
Jan 28 22:58:17 corosync [LCK   ] [DEBUG]: lck_confchg_fn
Jan 28 22:58:17 corosync [MSG   ] [DEBUG]: msg_confchg_fn
Jan 28 22:58:17 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 28: memb=0, new=0, lost=0
Jan 28 22:58:17 corosync [CLM   ] CLM CONFIGURATION CHANGE
Jan 28 22:58:17 corosync [CLM   ] New Configuration:
Jan 28 22:58:17 corosync [CLM   ]     r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:17 corosync [CLM   ] Members Left:
Jan 28 22:58:17 corosync [CLM   ] Members Joined:
Jan 28 22:58:17 corosync [CLM   ]     r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:17 corosync [EVT   ] Evt conf change 0
Jan 28 22:58:17 corosync [EVT   ] m 1, j 1 l 0
Jan 28 22:58:17 corosync [LCK   ] [DEBUG]: lck_confchg_fn
Jan 28 22:58:17 corosync [MSG   ] [DEBUG]: msg_confchg_fn
Jan 28 22:58:17 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 28: memb=1, new=1, lost=0
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_peer_update: NEW:  node01.houseofdraper.org 188983488
Jan 28 22:58:17 corosync [pcmk  ] debug: pcmk_peer_update: Node 188983488 has address r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_peer_update: MEMB: node01.houseofdraper.org 188983488
Jan 28 22:58:17 corosync [pcmk  ] debug: send_cluster_id: Local update: id=188983488, born=0, seq=28
Jan 28 22:58:17 corosync [pcmk  ] info: update_member: Node node01.houseofdraper.org now has process list: 00000000000000000000000000053312 (340754)
Jan 28 22:58:17 corosync [SYNC  ] This node is within the primary component and will provide service.
Jan 28 22:58:17 corosync [TOTEM ] entering OPERATIONAL state.
Jan 28 22:58:17 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering 0 to 1
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 1 to pending delivery queue
Jan 28 22:58:17 corosync [pcmk  ] debug: pcmk_cluster_id_callback: Node update: node01.houseofdraper.org (1.0.7)
Jan 28 22:58:17 corosync [SYNC  ] confchg entries 1
Jan 28 22:58:17 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:17 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:17 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:17 corosync [SYNC  ] Synchronization actions starting for (openais cluster membership service B.01.01)
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering 1 to 2
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 2 to pending delivery queue
Jan 28 22:58:17 corosync [CLM   ] got nodejoin message 192.168.67.11
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 1
Jan 28 22:58:17 corosync [TOTEM ] Delivering 2 to 3
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 3 to pending delivery queue
Jan 28 22:58:17 corosync [SYNC  ] confchg entries 1
Jan 28 22:58:17 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:17 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:17 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:17 corosync [SYNC  ] Committing synchronization for (openais cluster membership service B.01.01)
Jan 28 22:58:17 corosync [SYNC  ] Synchronization actions starting for (dummy AMF service)
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 2
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 3
Jan 28 22:58:17 corosync [TOTEM ] Delivering 3 to 4
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 4 to pending delivery queue
Jan 28 22:58:17 corosync [SYNC  ] confchg entries 1
Jan 28 22:58:17 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:17 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:17 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:17 corosync [SYNC  ] Committing synchronization for (dummy AMF service)
Jan 28 22:58:17 corosync [SYNC  ] Synchronization actions starting for (openais checkpoint service B.01.01)
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering 4 to 5
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 5 to pending delivery queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 4
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 5
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering 5 to 6
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 6 to pending delivery queue
Jan 28 22:58:17 corosync [SYNC  ] confchg entries 1
Jan 28 22:58:17 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:17 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:17 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:17 corosync [SYNC  ] Committing synchronization for (openais checkpoint service B.01.01)
Jan 28 22:58:17 corosync [SYNC  ] Synchronization actions starting for (openais event service B.01.01)
Jan 28 22:58:17 corosync [EVT   ] Evt synchronize initialization
Jan 28 22:58:17 corosync [EVT   ] My node ID r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:17 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:17 corosync [EVT   ] Send max event ID updates
Jan 28 22:58:17 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:17 corosync [EVT   ] Send open count updates
Jan 28 22:58:17 corosync [EVT   ] DONE Sending open counts
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 6
Jan 28 22:58:17 corosync [TOTEM ] Delivering 6 to 7
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 7 to pending delivery queue
Jan 28 22:58:17 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:17 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:17 corosync [EVT   ] Receive EVT_CONF_CHANGE_DONE from nodeid r(0) ip(192.168.67.11) r(1) ip(192.168.2.11)  members 1 checked in 1
Jan 28 22:58:17 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:17 corosync [EVT   ] DONE Sending retained events
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering 7 to 8
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 8 to pending delivery queue
Jan 28 22:58:17 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:17 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:17 corosync [EVT   ] Receive EVT_CONF_DONE from nodeid r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) , members 1 checked in 1
Jan 28 22:58:17 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:17 corosync [EVT   ] Recovery complete
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 7
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 8
Jan 28 22:58:17 corosync [TOTEM ] Delivering 8 to 9
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq 9 to pending delivery queue
Jan 28 22:58:17 corosync [SYNC  ] confchg entries 1
Jan 28 22:58:17 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:17 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:17 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:17 corosync [EVT   ] Evt synchronize activation
Jan 28 22:58:17 corosync [SYNC  ] Committing synchronization for (openais event service B.01.01)
Jan 28 22:58:17 corosync [SYNC  ] Synchronization actions starting for (corosync cluster closed process group service v1.01)
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering 9 to b
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq a to pending delivery queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq b to pending delivery queue
Jan 28 22:58:17 corosync [CPG   ] downlist left_list: 0
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including 9
Jan 28 22:58:17 corosync [TOTEM ] Delivering b to c
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq c to pending delivery queue
Jan 28 22:58:17 corosync [SYNC  ] confchg entries 1
Jan 28 22:58:17 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:17 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:17 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:17 corosync [SYNC  ] Committing synchronization for (corosync cluster closed process group service v1.01)
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including b
Jan 28 22:58:17 corosync [TOTEM ] Delivering c to d
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq d to pending delivery queue
Jan 28 22:58:17 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including c
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including d
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x8afc0 for attrd/19334
Jan 28 22:58:17 corosync [pcmk  ] debug: process_ais_message: Msg[0] (dest=local:ais, from=node01.houseofdraper.org:attrd.19334, remote=true, size=6): 19334
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x8a980 for cib/19332
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_ipc: Sending membership update 28 to cib
Jan 28 22:58:17 corosync [pcmk  ] debug: process_ais_message: Msg[0] (dest=local:ais, from=node01.houseofdraper.org:cib.19332, remote=true, size=6): 19332
Jan 28 22:58:17 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x8c1a0 for stonithd/19331
Jan 28 22:58:17 corosync [pcmk  ] debug: process_ais_message: Msg[0] (dest=local:ais, from=node01.houseofdraper.org:stonithd.19331, remote=true, size=6): 19331
Jan 28 22:58:17 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=19337, rc=100)
Jan 28 22:58:17 corosync [pcmk  ] notice: pcmk_wait_dispatch: Child process mgmtd no longer wishes to be respawned
Jan 28 22:58:17 corosync [pcmk  ] debug: send_cluster_id: Local update: id=188983488, born=0, seq=28
Jan 28 22:58:17 corosync [pcmk  ] info: update_member: Node node01.houseofdraper.org now has process list: 00000000000000000000000000013312 (78610)
Jan 28 22:58:17 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:17 corosync [TOTEM ] Delivering d to e
Jan 28 22:58:17 corosync [TOTEM ] Delivering MCAST message with seq e to pending delivery queue
Jan 28 22:58:17 corosync [pcmk  ] debug: pcmk_cluster_id_callback: Node update: node01.houseofdraper.org (1.0.7)
Jan 28 22:58:17 corosync [TOTEM ] releasing messages up to and including e
Jan 28 22:58:18 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x8b510 for crmd/19336
Jan 28 22:58:18 corosync [pcmk  ] info: pcmk_ipc: Sending membership update 28 to crmd
Jan 28 22:58:18 corosync [pcmk  ] debug: process_ais_message: Msg[0] (dest=local:ais, from=node01.houseofdraper.org:crmd.19336, remote=true, size=6): 19336
Jan 28 22:58:18 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:18 corosync [TOTEM ] Delivering e to f
Jan 28 22:58:18 corosync [TOTEM ] Delivering MCAST message with seq f to pending delivery queue
Jan 28 22:58:18 corosync [TOTEM ] releasing messages up to and including f
Jan 28 22:58:18 corosync [pcmk  ] info: update_expected_votes: Expected quorum votes 1024 -> 2
Jan 28 22:58:19 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:19 corosync [TOTEM ] Delivering f to 10
Jan 28 22:58:19 corosync [TOTEM ] Delivering MCAST message with seq 10 to pending delivery queue
Jan 28 22:58:19 corosync [TOTEM ] releasing messages up to and including 10
Jan 28 22:58:32 corosync [TOTEM ] entering GATHER state from 11.
Jan 28 22:58:32 corosync [TOTEM ] Creating commit token because I am the rep.
Jan 28 22:58:32 corosync [TOTEM ] Saving state aru 10 high seq received 10
Jan 28 22:58:32 corosync [TOTEM ] Storing new sequence id for ring 20
Jan 28 22:58:32 corosync [TOTEM ] entering COMMIT state.
Jan 28 22:58:32 corosync [TOTEM ] got commit token
Jan 28 22:58:32 corosync [TOTEM ] got commit token
Jan 28 22:58:32 corosync [TOTEM ] entering RECOVERY state.
Jan 28 22:58:32 corosync [TOTEM ] position [0] member 192.168.67.11:
Jan 28 22:58:32 corosync [TOTEM ] previous ring seq 28 rep 192.168.67.11
Jan 28 22:58:32 corosync [TOTEM ] aru 10 high delivered 10 received flag 1
Jan 28 22:58:32 corosync [TOTEM ] position [1] member 192.168.67.12:
Jan 28 22:58:32 corosync [TOTEM ] previous ring seq 24 rep 192.168.67.12
Jan 28 22:58:32 corosync [TOTEM ] aru d high delivered d received flag 1
Jan 28 22:58:32 corosync [TOTEM ] Did not need to originate any messages in recovery.
Jan 28 22:58:32 corosync [TOTEM ] got commit token
Jan 28 22:58:32 corosync [TOTEM ] Sending initial ORF token
Jan 28 22:58:32 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Jan 28 22:58:32 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:32 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Jan 28 22:58:32 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:32 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Jan 28 22:58:32 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:32 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Jan 28 22:58:32 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Jan 28 22:58:32 corosync [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
Jan 28 22:58:32 corosync [TOTEM ] recovery to regular 1-0
Jan 28 22:58:32 corosync [TOTEM ] Delivering to app 11 to 10
Jan 28 22:58:32 corosync [CLM   ] CLM CONFIGURATION CHANGE
Jan 28 22:58:32 corosync [CLM   ] New Configuration:
Jan 28 22:58:32 corosync [CLM   ]     r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:32 corosync [CLM   ] Members Left:
Jan 28 22:58:32 corosync [CLM   ] Members Joined:
Jan 28 22:58:32 corosync [EVT   ] Evt conf change 1
Jan 28 22:58:32 corosync [EVT   ] m 1, j 0 l 0
Jan 28 22:58:32 corosync [LCK   ] [DEBUG]: lck_confchg_fn
Jan 28 22:58:32 corosync [MSG   ] [DEBUG]: msg_confchg_fn
Jan 28 22:58:32 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 32: memb=1, new=0, lost=0
Jan 28 22:58:32 corosync [pcmk  ] info: pcmk_peer_update: memb: node01.houseofdraper.org 188983488
Jan 28 22:58:32 corosync [CLM   ] CLM CONFIGURATION CHANGE
Jan 28 22:58:32 corosync [CLM   ] New Configuration:
Jan 28 22:58:32 corosync [CLM   ]     r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:32 corosync [CLM   ]     r(0) ip(192.168.67.12) r(1) ip(192.168.2.12) 
Jan 28 22:58:32 corosync [CLM   ] Members Left:
Jan 28 22:58:32 corosync [CLM   ] Members Joined:
Jan 28 22:58:32 corosync [CLM   ]     r(0) ip(192.168.67.12) r(1) ip(192.168.2.12) 
Jan 28 22:58:32 corosync [EVT   ] Evt conf change 0
Jan 28 22:58:32 corosync [EVT   ] m 2, j 1 l 0
Jan 28 22:58:32 corosync [LCK   ] [DEBUG]: lck_confchg_fn
Jan 28 22:58:32 corosync [MSG   ] [DEBUG]: msg_confchg_fn
Jan 28 22:58:32 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 32: memb=2, new=1, lost=0
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: Creating entry for node 205760704 born on 32
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: Node 205760704/unknown is now: member
Jan 28 22:58:32 corosync [pcmk  ] info: pcmk_peer_update: NEW:  .pending. 205760704
Jan 28 22:58:32 corosync [pcmk  ] debug: pcmk_peer_update: Node 205760704 has address r(0) ip(192.168.67.12) r(1) ip(192.168.2.12) 
Jan 28 22:58:32 corosync [pcmk  ] info: pcmk_peer_update: MEMB: node01.houseofdraper.org 188983488
Jan 28 22:58:32 corosync [pcmk  ] info: pcmk_peer_update: MEMB: .pending. 205760704
Jan 28 22:58:32 corosync [pcmk  ] debug: pcmk_peer_update: 1 nodes changed
Jan 28 22:58:32 corosync [pcmk  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan 28 22:58:32 corosync [pcmk  ] debug: send_cluster_id: Local update: id=188983488, born=32, seq=32
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: 0x86cf0 Node 188983488 ((null)) born on: 32
Jan 28 22:58:32 corosync [SYNC  ] This node is within the primary component and will provide service.
Jan 28 22:58:32 corosync [TOTEM ] entering OPERATIONAL state.
Jan 28 22:58:32 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1
Jan 28 22:58:32 corosync [TOTEM ] Delivering 0 to 1
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 1 to pending delivery queue
Jan 28 22:58:32 corosync [pcmk  ] debug: pcmk_cluster_id_callback: Node update: node02.houseofdraper.org (1.0.7)
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: 0x8bb90 Node 205760704 (node02.houseofdraper.org) born on: 32
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: 0x8bb90 Node 205760704 now known as node02.houseofdraper.org (was: (null))
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: Node node02.houseofdraper.org now has process list: 00000000000000000000000000053312 (340754)
Jan 28 22:58:32 corosync [pcmk  ] info: update_member: Node node02.houseofdraper.org now has 1 quorum votes (was 0)
Jan 28 22:58:32 corosync [pcmk  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 205760704
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 0. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] Delivering 1 to 2
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 2 to pending delivery queue
Jan 28 22:58:32 corosync [pcmk  ] debug: pcmk_cluster_id_callback: Node update: node01.houseofdraper.org (1.0.7)
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:32 corosync [SYNC  ] Synchronization actions starting for (openais cluster membership service B.01.01)
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 2
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 1
Jan 28 22:58:32 corosync [TOTEM ] Delivering 2 to 3
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 3 to pending delivery queue
Jan 28 22:58:32 corosync [CLM   ] got nodejoin message 192.168.67.11
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 3
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 4
Jan 28 22:58:32 corosync [TOTEM ] Delivering 3 to 4
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 4 to pending delivery queue
Jan 28 22:58:32 corosync [CLM   ] got nodejoin message 192.168.67.12
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 2
Jan 28 22:58:32 corosync [TOTEM ] Delivering 4 to 5
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 5 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 0. 
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 5
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 6
Jan 28 22:58:32 corosync [TOTEM ] Delivering 5 to 6
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 6 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 205760704
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:32 corosync [SYNC  ] Committing synchronization for (openais cluster membership service B.01.01)
Jan 28 22:58:32 corosync [SYNC  ] Synchronization actions starting for (dummy AMF service)
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 4
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 6
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 7
Jan 28 22:58:32 corosync [TOTEM ] Delivering 6 to 7
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 7 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 205760704
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 0. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] Delivering 7 to 8
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 8 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:32 corosync [SYNC  ] Committing synchronization for (dummy AMF service)
Jan 28 22:58:32 corosync [SYNC  ] Synchronization actions starting for (openais checkpoint service B.01.01)
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 8
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 7
Jan 28 22:58:32 corosync [TOTEM ] Delivering 8 to 9
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 9 to pending delivery queue
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 9
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq a
Jan 28 22:58:32 corosync [TOTEM ] Delivering 9 to a
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq a to pending delivery queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 8
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including a
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] Delivering a to b
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq b to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 0. 
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq b
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq c
Jan 28 22:58:32 corosync [TOTEM ] Delivering b to c
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq c to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 205760704
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:32 corosync [SYNC  ] Committing synchronization for (openais checkpoint service B.01.01)
Jan 28 22:58:32 corosync [SYNC  ] Synchronization actions starting for (openais event service B.01.01)
Jan 28 22:58:32 corosync [EVT   ] Evt synchronize initialization
Jan 28 22:58:32 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:32 corosync [EVT   ] Send max event ID updates
Jan 28 22:58:32 corosync [EVT   ] Send set evt ID 0 to r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq d
Jan 28 22:58:32 corosync [TOTEM ] Delivering c to d
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq d to pending delivery queue
Jan 28 22:58:32 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:32 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:32 corosync [EVT   ] Received Set event ID OP from nodeid c43a8c0 to 0 for c43a8c0 my addr r(0) ip(192.168.67.11) r(1) ip(192.168.2.11)  base 1
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including c
Jan 28 22:58:32 corosync [TOTEM ] Delivering d to e
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq e to pending delivery queue
Jan 28 22:58:32 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:32 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:32 corosync [EVT   ] Received Set event ID OP from nodeid b43a8c0 to 0 for b43a8c0 my addr r(0) ip(192.168.67.11) r(1) ip(192.168.2.11)  base 1
Jan 28 22:58:32 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:32 corosync [EVT   ] Send open count updates
Jan 28 22:58:32 corosync [EVT   ] DONE Sending open counts
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq e
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq f
Jan 28 22:58:32 corosync [TOTEM ] Delivering e to f
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq f to pending delivery queue
Jan 28 22:58:32 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:32 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:32 corosync [EVT   ] Receive EVT_CONF_CHANGE_DONE from nodeid r(0) ip(192.168.67.12) r(1) ip(192.168.2.12)  members 2 checked in 1
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including d
Jan 28 22:58:32 corosync [TOTEM ] Delivering f to 10
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 10 to pending delivery queue
Jan 28 22:58:32 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:32 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:32 corosync [EVT   ] Receive EVT_CONF_CHANGE_DONE from nodeid r(0) ip(192.168.67.11) r(1) ip(192.168.2.11)  members 2 checked in 2
Jan 28 22:58:32 corosync [EVT   ] I am oldest in my transitional config
Jan 28 22:58:32 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:32 corosync [EVT   ] Send retained event updates
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 10
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including f
Jan 28 22:58:32 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:32 corosync [EVT   ] DONE Sending retained events
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 10
Jan 28 22:58:32 corosync [TOTEM ] Delivering 10 to 11
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 11 to pending delivery queue
Jan 28 22:58:32 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:32 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:32 corosync [EVT   ] Receive EVT_CONF_DONE from nodeid r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) , members 2 checked in 1
Jan 28 22:58:32 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:32 corosync [EVT   ] Wait for retained events
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 11
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 12
Jan 28 22:58:32 corosync [TOTEM ] Delivering 11 to 12
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 12 to pending delivery queue
Jan 28 22:58:32 corosync [EVT   ] Remote channel operation request
Jan 28 22:58:32 corosync [EVT   ] my node ID: 0xb43a8c0
Jan 28 22:58:32 corosync [EVT   ] Receive EVT_CONF_DONE from nodeid r(0) ip(192.168.67.12) r(1) ip(192.168.2.12) , members 2 checked in 2
Jan 28 22:58:32 corosync [EVT   ] Process Evt synchronization 
Jan 28 22:58:32 corosync [EVT   ] Recovery complete
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 12
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 13
Jan 28 22:58:32 corosync [TOTEM ] Delivering 12 to 13
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 13 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 205760704
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 0. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] Delivering 13 to 14
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 14 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:32 corosync [EVT   ] Evt synchronize activation
Jan 28 22:58:32 corosync [SYNC  ] Committing synchronization for (openais event service B.01.01)
Jan 28 22:58:32 corosync [SYNC  ] Synchronization actions starting for (corosync cluster closed process group service v1.01)
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 14
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 13
Jan 28 22:58:32 corosync [TOTEM ] Delivering 14 to 16
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 15 to pending delivery queue
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 16 to pending delivery queue
Jan 28 22:58:32 corosync [CPG   ] downlist left_list: 0
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 15
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 16
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 18
Jan 28 22:58:32 corosync [TOTEM ] Delivering 16 to 18
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 17
Jan 28 22:58:32 corosync [TOTEM ] Delivering 16 to 18
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 17 to pending delivery queue
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 18 to pending delivery queue
Jan 28 22:58:32 corosync [CPG   ] downlist left_list: 0
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 14
Jan 28 22:58:32 corosync [TOTEM ] Delivering 18 to 19
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 19 to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 188983488
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 0. 
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 19
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1a
Jan 28 22:58:32 corosync [TOTEM ] Delivering 19 to 1a
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 1a to pending delivery queue
Jan 28 22:58:32 corosync [SYNC  ] confchg entries 2
Jan 28 22:58:32 corosync [SYNC  ] Barrier Start Received From 205760704
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 188983488 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Barrier completion status for nodeid 205760704 = 1. 
Jan 28 22:58:32 corosync [SYNC  ] Synchronization barrier completed
Jan 28 22:58:32 corosync [SYNC  ] Committing synchronization for (corosync cluster closed process group service v1.01)
Jan 28 22:58:32 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 18
Jan 28 22:58:32 corosync [TOTEM ] Delivering 1a to 1b
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 1b to pending delivery queue
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1b
Jan 28 22:58:32 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1c
Jan 28 22:58:32 corosync [TOTEM ] Delivering 1b to 1c
Jan 28 22:58:32 corosync [TOTEM ] Delivering MCAST message with seq 1c to pending delivery queue
Jan 28 22:58:32 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 1a
Jan 28 22:58:32 corosync [TOTEM ] releasing messages up to and including 1c
Jan 28 22:58:33 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1d
Jan 28 22:58:33 corosync [TOTEM ] Delivering 1c to 1d
Jan 28 22:58:33 corosync [TOTEM ] Delivering MCAST message with seq 1d to pending delivery queue
Jan 28 22:58:33 corosync [pcmk  ] debug: pcmk_cluster_id_callback: Node update: node02.houseofdraper.org (1.0.7)
Jan 28 22:58:33 corosync [pcmk  ] info: update_member: Node node02.houseofdraper.org now has process list: 00000000000000000000000000013312 (78610)
Jan 28 22:58:33 corosync [pcmk  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan 28 22:58:33 corosync [TOTEM ] releasing messages up to and including 1d
Jan 28 22:58:33 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1e
Jan 28 22:58:33 corosync [TOTEM ] Delivering 1d to 1e
Jan 28 22:58:33 corosync [TOTEM ] Delivering MCAST message with seq 1e to pending delivery queue
Jan 28 22:58:33 corosync [TOTEM ] releasing messages up to and including 1e
Jan 28 22:58:34 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 1f
Jan 28 22:58:34 corosync [TOTEM ] Delivering 1e to 1f
Jan 28 22:58:34 corosync [TOTEM ] Delivering MCAST message with seq 1f to pending delivery queue
Jan 28 22:58:34 corosync [TOTEM ] releasing messages up to and including 1f
Jan 28 22:59:19 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:59:19 corosync [TOTEM ] Delivering 1f to 20
Jan 28 22:59:19 corosync [TOTEM ] Delivering MCAST message with seq 20 to pending delivery queue
Jan 28 22:59:19 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 20
Jan 28 22:59:19 corosync [TOTEM ] releasing messages up to and including 20
Jan 28 22:59:34 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 21
Jan 28 22:59:34 corosync [TOTEM ] Delivering 20 to 21
Jan 28 22:59:34 corosync [TOTEM ] Delivering MCAST message with seq 21 to pending delivery queue
Jan 28 22:59:34 corosync [TOTEM ] releasing messages up to and including 21
Jan 28 22:59:46 corosync [SERV  ] Unloading all Corosync service engines.
Jan 28 22:59:46 corosync [pcmk  ] notice: pcmk_shutdown: Shuting down Pacemaker
Jan 28 22:59:46 corosync [pcmk  ] notice: pcmk_shutdown: mgmtd confirmed stopped
Jan 28 22:59:46 corosync [pcmk  ] debug: stop_child: Stopping CRM child "crmd"
Jan 28 22:59:46 corosync [pcmk  ] notice: stop_child: Sent -15 to crmd: [19336]
Jan 28 22:59:46 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:59:46 corosync [TOTEM ] Delivering 21 to 22
Jan 28 22:59:46 corosync [TOTEM ] Delivering MCAST message with seq 22 to pending delivery queue
Jan 28 22:59:46 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 22
Jan 28 22:59:46 corosync [TOTEM ] mcasted message added to pending queue
Jan 28 22:59:46 corosync [TOTEM ] Delivering 22 to 23
Jan 28 22:59:46 corosync [TOTEM ] Delivering MCAST message with seq 23 to pending delivery queue
Jan 28 22:59:46 corosync [TOTEM ] Received ringid(192.168.67.11:32) seq 23
Jan 28 22:59:46 corosync [TOTEM ] releasing messages up to and including 22
Jan 28 22:59:46 corosync [TOTEM ] releasing messages up to and including 23
----------------------------------------

And finally, the relevant sections from node01's /var/log/messages
(sorry, this one is long as well...):

----------------------------------------
-bash-4.0# tail -100 /var/log/messages
Jan 28 22:58:19 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:19 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:21 node01 attrd: [19334]: info: cib_connect: Connected to the CIB after 1 signon attempts
Jan 28 22:58:21 node01 attrd: [19334]: info: cib_connect: Sending full refresh
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] CLM CONFIGURATION CHANGE
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] New Configuration:
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] #011r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] Members Left:
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] Members Joined:
Jan 28 22:58:32 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:32 node01 cib: [19332]: WARN: check_message_sanity: Message payload size is incorrect: expected 128, got 4092288
Jan 28 22:58:32 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:32 node01 cib: [19332]: WARN: check_message_sanity: Message payload is corrupted: expected 128 bytes, got 384
Jan 28 22:58:32 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:32 node01 cib: [19332]: ERROR: check_message_sanity: Invalid message 5: (dest=<all>:unknown, from=node01.houseofdraper.org:ais.188983296, compressed=0, size=128, total=4092880)
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 32: memb=1, new=0, lost=0
Jan 28 22:58:32 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:32 node01 cib: [19332]: ERROR: ais_dispatch: Invalid message (id=5, dest=<all>:unknown, from=node01.houseofdraper.org:ais.188983296): min=592, total=4092880, size=128, bz2_size=256
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: pcmk_peer_update: memb: node01.houseofdraper.org 188983488
Jan 28 22:58:32 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:32 node01 cib: [19332]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] CLM CONFIGURATION CHANGE
Jan 28 22:58:32 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:32 node01 cib: [19332]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] New Configuration:
Jan 28 22:58:32 node01 cib: [19332]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] #011r(0) ip(192.168.67.11) r(1) ip(192.168.2.11) 
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] #011r(0) ip(192.168.67.12) r(1) ip(192.168.2.12) 
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] Members Left:
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] Members Joined:
Jan 28 22:58:32 node01 corosync[19324]:   [CLM   ] #011r(0) ip(192.168.67.12) r(1) ip(192.168.2.12) 
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 32: memb=2, new=1, lost=0
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: Creating entry for node 205760704 born on 32
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: Node 205760704/unknown is now: member
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: pcmk_peer_update: NEW:  .pending. 205760704
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: pcmk_peer_update: MEMB: node01.houseofdraper.org 188983488
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: pcmk_peer_update: MEMB: .pending. 205760704
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: 0x86cf0 Node 188983488 ((null)) born on: 32
Jan 28 22:58:32 node01 corosync[19324]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: 0x8bb90 Node 205760704 (node02.houseofdraper.org) born on: 32
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: 0x8bb90 Node 205760704 now known as node02.houseofdraper.org (was: (null))
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: Node node02.houseofdraper.org now has process list: 00000000000000000000000000053312 (340754)
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: update_member: Node node02.houseofdraper.org now has 1 quorum votes (was 0)
Jan 28 22:58:32 node01 corosync[19324]:   [pcmk  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan 28 22:58:32 node01 corosync[19324]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jan 28 22:58:33 node01 corosync[19324]:   [pcmk  ] info: update_member: Node node02.houseofdraper.org now has process list: 00000000000000000000000000013312 (78610)
Jan 28 22:58:33 node01 corosync[19324]:   [pcmk  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan 28 22:58:33 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:33 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:33 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:33 node01 cib: [19332]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:33 node01 cib: [19332]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:33 node01 cib: [19332]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:33 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:33 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:33 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:58:34 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:58:34 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:58:34 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:59:19 node01 crmd: [19336]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
Jan 28 22:59:19 node01 crmd: [19336]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jan 28 22:59:19 node01 crmd: [19336]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 28 22:59:19 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:59:19 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:59:19 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:59:34 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:59:34 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:59:34 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:59:46 node01 corosync[19324]:   [SERV  ] Unloading all Corosync service engines.
Jan 28 22:59:46 node01 corosync[19324]:   [pcmk  ] notice: pcmk_shutdown: Shuting down Pacemaker
Jan 28 22:59:46 node01 corosync[19324]:   [pcmk  ] notice: pcmk_shutdown: mgmtd confirmed stopped
Jan 28 22:59:46 node01 corosync[19324]:   [pcmk  ] notice: stop_child: Sent -15 to crmd: [19336]
Jan 28 22:59:46 node01 crmd: [19336]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jan 28 22:59:46 node01 crmd: [19336]: info: crm_shutdown: Requesting shutdown
Jan 28 22:59:46 node01 crmd: [19336]: info: do_shutdown_req: Sending shutdown request to DC: <null>
Jan 28 22:59:46 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:59:46 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:59:46 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:59:46 node01 crmd: [19336]: WARN: check_message_sanity: Message with no size
Jan 28 22:59:46 node01 crmd: [19336]: ERROR: check_message_sanity: Invalid message 0: (dest=<all>:unknown, from=<all>:unknown.0, compressed=0, size=0, total=0)
Jan 28 22:59:46 node01 crmd: [19336]: ERROR: ais_dispatch: Invalid message (id=0, dest=<all>:unknown, from=<all>:unknown.0): min=592, total=0, size=0, bz2_size=0
Jan 28 22:59:50 node01 attrd: [19334]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Resource temporarily unavailable (11)
Jan 28 22:59:50 node01 attrd: [19334]: ERROR: ais_dispatch: AIS connection failed
Jan 28 22:59:50 node01 attrd: [19334]: CRIT: attrd_ais_destroy: Lost connection to OpenAIS service!
Jan 28 22:59:50 node01 attrd: [19334]: info: main: Exiting...
Jan 28 22:59:50 node01 attrd: [19334]: ERROR: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jan 28 22:59:50 node01 cib: [19332]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Resource temporarily unavailable (11)
Jan 28 22:59:50 node01 cib: [19332]: ERROR: ais_dispatch: AIS connection failed
Jan 28 22:59:50 node01 cib: [19332]: ERROR: cib_ais_destroy: AIS connection terminated
Jan 28 22:59:50 node01 crmd: [19336]: info: cib_native_msgready: Lost connection to the CIB service [19332].
Jan 28 22:59:50 node01 crmd: [19336]: CRIT: cib_native_dispatch: Lost connection to the CIB service [19332/callback].
Jan 28 22:59:50 node01 crmd: [19336]: CRIT: cib_native_dispatch: Lost connection to the CIB service [19332/command].
Jan 28 22:59:50 node01 crmd: [19336]: ERROR: crmd_cib_connection_destroy: Connection to the CIB terminated...
Jan 28 22:59:50 node01 crmd: [19336]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Invalid argument (22)
Jan 28 22:59:50 node01 crmd: [19336]: ERROR: ais_dispatch: AIS connection failed
Jan 28 22:59:50 node01 crmd: [19336]: ERROR: crm_ais_destroy: AIS connection terminated
Jan 28 22:59:50 node01 stonithd: [19331]: ERROR: ais_dispatch: Receiving message body failed: (2) Library error: Success (0)
Jan 28 22:59:50 node01 stonithd: [19331]: ERROR: ais_dispatch: AIS connection failed
Jan 28 22:59:50 node01 stonithd: [19331]: ERROR: AIS connection terminated
----------------------------------------

It's funny. Most people get tripped up configuring resources and services. I can't even get the cluster to form. Go figure. Anyway, I once heard it said that everything's easy once you know what you're doing. Anyone care to steer me in the right direction?

Thanks in advance, and have a great day!

DJ