[Pacemaker] fail-count is not updated

Kazunori INOUE inouekazu@intellilink.co.jp
Mon Apr 2 01:40:20 EDT 2012


Hi, Andrew

When pacemaker-1.1.7 is combined with corosync-1.99.9, fail-count is not
updated when a monitor operation fails.

I am using the newest devel code:
- pacemaker : 7172b7323bb72c51999ce11c6fa5d3ff0a0a4b4f
- corosync  : 4b2cfc3f6beabe517b28ea31c5340bf3b0a6b455
- glue      : 041b464f74c8
- libqb     : 7b13d09afbb684f9ee59def23b155b38a21987df

# crm_mon -f1
============
Last updated: Mon Apr  2 14:03:03 2012
Last change: Mon Apr  2 14:02:33 2012 via cibadmin on vm1
Stack: corosync
Current DC: vm1 (224766144) - partition with quorum
Version: 1.1.7-7172b73
2 Nodes configured, unknown expected votes
1 Resources configured.
============

Online: [ vm1 vm2 ]

 prmDummy1      (ocf::pacemaker:Dummy): Started vm1

Migration summary:
* Node vm1:
* Node vm2:

Failed actions:
    prmDummy1_monitor_10000 (node=vm1, call=4, rc=7, status=complete): not running
#

I think this is because corosync's nodeid and the hostname are intermingled
in the value that identifies a cluster node.
I added debugging code to confirm this (l.769):

# vi pacemaker/tools/attrd.c
<snip>
752 void
753 attrd_local_callback(xmlNode * msg)
754 {
<snip>
768
769     crm_info("DEBUG: [%s,%s,%s,%s,%s],[%s]\n", from, op, attr, value, host, attrd_uname);
770     if (host != NULL && safe_str_neq(host, attrd_uname)) {
771         send_cluster_message(host, crm_msg_attrd, msg, FALSE);
772         return;
773     }
774
775     crm_debug("%s message from %s: %s=%s", op, from, attr, crm_str(value));

[root@vm1 ~]# grep DEBUG /var/log/ha-debug
<snip>
Apr  2 14:02:34 vm1 Dummy(prmDummy1)[21140]: DEBUG: prmDummy1 monitor : 7
Apr  2 14:02:34 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,probe_complete,true,(null)],[vm1]
Apr  2 14:02:34 vm1 Dummy(prmDummy1)[21151]: DEBUG: prmDummy1 start : 0
Apr  2 14:02:34 vm1 Dummy(prmDummy1)[21159]: DEBUG: prmDummy1 monitor : 0
Apr  2 14:02:44 vm1 Dummy(prmDummy1)[21166]: DEBUG: prmDummy1 monitor : 0
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21175]: DEBUG: prmDummy1 monitor : 7
Apr  2 14:02:54 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,fail-count-prmDummy1,value++,224766144],[vm1]
Apr  2 14:02:54 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,last-failure-prmDummy1,1333342974,224766144],[vm1]
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21182]: DEBUG: prmDummy1 stop : 0
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21189]: DEBUG: prmDummy1 start : 0
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21201]: DEBUG: prmDummy1 monitor : 0

Corosync's nodeid ("224766144") was stored in the variable 'host', while the
hostname ("vm1") was stored in 'attrd_uname'. Since the two never match,
attrd forwards the fail-count update to a peer named "224766144" instead of
applying it locally, and no such node exists, so the counter is never updated.
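To make the failure mode concrete, here is a minimal stand-alone sketch
(my own illustration, not Pacemaker code; safe_str_neq() is simplified and
the values are copied from the log above). Whenever crmd passes the
corosync nodeid as 'host', the guard at l.770 succeeds and attrd relays
the update to a peer named "224766144" that no node answers to:

#include <stdio.h>
#include <string.h>

/* simplified stand-in for Pacemaker's safe_str_neq() */
static int safe_str_neq(const char *a, const char *b)
{
    if (a == NULL || b == NULL)
        return a != b;                 /* unequal unless both are NULL */
    return strcmp(a, b) != 0;
}

int main(void)
{
    const char *attrd_uname = "vm1";   /* local hostname */
    const char *host = "224766144";    /* nodeid sent by crmd */

    if (host != NULL && safe_str_neq(host, attrd_uname)) {
        /* attrd takes this branch: the fail-count update is relayed to
         * a "node" called 224766144 and is never applied locally */
        printf("forwarding update to '%s'\n", host);
    } else {
        printf("applying update locally on '%s'\n", attrd_uname);
    }
    return 0;
}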

[root@vm1 ~]# corosync-cfgtool -s | grep node
Local node ID 224766144
[root@vm1 ~]#
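
Incidentally, the nodeid seems to be derived from the ring0 address:
192.168.101.141 read as a 32-bit value on this little-endian host is
0x8D65A8C0, and with 'clear_node_high_bit: yes' the top bit is cleared,
giving 0x0D65A8C0 = 224766144 (likewise 192.168.101.142 -> 241543360 for
vm2). A small sketch of that arithmetic (again my own illustration, not
corosync source):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    struct in_addr a;

    if (inet_pton(AF_INET, "192.168.101.141", &a) != 1)
        return 1;

    /* s_addr read natively on a little-endian host: 0x8D65A8C0 */
    uint32_t nodeid = a.s_addr & 0x7FFFFFFFu;  /* clear_node_high_bit */

    printf("nodeid = %u\n", (unsigned) nodeid); /* prints 224766144 */
    return 0;
}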

Regards,
Kazunori INOUE
-------------- next part (corosync.conf) --------------
# Please read the corosync.conf.5 manual page
compatibility: whitetank

aisexec {
	user: root
	group: root
}

service {
	# Load the Pacemaker Cluster Resource Manager
	name: pacemaker
	ver:  1
	use_logd: no
}

totem {
	version: 2
	secauth: off
	threads: 0
	rrp_mode: active
	token: 4000
	rrp_problem_count_timeout: 40000
	clear_node_high_bit: yes
	interface {
		ringnumber: 0
		bindnetaddr: 192.168.101.0
		mcastaddr: 226.94.1.1
		mcastport: 5436
	}
	interface {
		ringnumber: 1
		bindnetaddr: 192.168.102.0
		mcastaddr: 226.94.1.1
		mcastport: 5436
	}
}

logging {
	fileline: on
	to_syslog: yes
	syslog_facility: local1
	syslog_priority: info
	debug: on
	timestamp: on
}

quorum {
	provider: corosync_votequorum
	expected_votes: 1
	votes: 1
}
-------------- next part (cluster configuration) --------------
property no-quorum-policy="ignore" \
	stonith-enabled="false" \
	startup-fencing="false"
rsc_defaults resource-stickiness="INFINITY" \
	migration-threshold="1"

primitive prmDummy1 ocf:pacemaker:Dummy \
	op start timeout="60s" on-fail="restart" \
	op monitor interval="10s" timeout="60s" on-fail="restart" \
	op stop timeout="60s" on-fail="block"
location rsc_location-prmDummy1-1 prmDummy1 \
	rule 200: #uname eq vm1
-------------- next part (corosync/pacemaker debug log) --------------
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:837 Token Timeout (4000 ms) retransmit timeout (952 ms)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:840 token hold (751 ms) retransmits before loss (4 retrans)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:847 join (50 ms) send_join (0 ms) consensus (4800 ms) merge (200 ms)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:850 downcheck (1000 ms) fail to recv const (2500 msgs)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:852 seqno unchanged const (30 rotations) Maximum network MTU 1401
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:856 window size per rotation (50 messages) maximum messages per rotation (17 messages)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:860 missed count const (5 messages)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:863 send threads (0 threads)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:866 RRP token expired timeout (952 ms)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:869 RRP token problem counter (40000 ms)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:872 RRP threshold (10 problem count)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:875 RRP multicast threshold (100 problem count)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:878 RRP automatic recovery check timeout (1000 ms)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:880 RRP mode set to active.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:883 heartbeat_failures_allowed (0)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:885 max_network_delay (50 ms)
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:908 HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemnet.c:241 Initializing transport (UDP/IP Multicast).
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemcrypto.c:526 Initializing transmit/receive security (NSS) crypto: none hash: none
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemnet.c:241 Initializing transport (UDP/IP Multicast).
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemcrypto.c:526 Initializing transmit/receive security (NSS) crypto: none hash: none
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemudp.c:792 Receive multicast socket recv buffer size (262142 bytes).
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemudp.c:798 Transmit multicast socket send buffer size (262142 bytes).
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemudp.c:602 The network interface [192.168.101.141] is now up.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4502 Created or loaded sequence id 0.192.168.101.141 for this ring.
Apr  2 14:01:56 vm1 corosync[21054]:   [SERV  ] service.c:177 Service engine loaded: corosync configuration map access [0]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_glue.c:810 Initializing IPC on cmap [0]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_us.c:511 server name: cmap
Apr  2 14:01:56 vm1 corosync[21054]:   [SERV  ] service.c:177 Service engine loaded: corosync configuration service [1]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_glue.c:810 Initializing IPC on cfg [1]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_us.c:511 server name: cfg
Apr  2 14:01:56 vm1 corosync[21054]:   [SERV  ] service.c:177 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_glue.c:810 Initializing IPC on cpg [2]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_us.c:511 server name: cpg
Apr  2 14:01:56 vm1 corosync[21054]:   [SERV  ] service.c:177 Service engine loaded: corosync profile loading service [4]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_glue.c:802 NOT Initializing IPC on pload [4]
Apr  2 14:01:56 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:277 Using quorum provider corosync_votequorum
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:897 Reading configuration (runtime: 0)
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:819 No nodelist defined or our node is not in the nodelist
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=1, expected_votes=1
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=1
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:712 quorum regained, resuming activity
Apr  2 14:01:56 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:148 This node is within the primary component and will provide service.
Apr  2 14:01:56 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:132 Members[0]:
Apr  2 14:01:56 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:362 sending quorum notification to (nil), length = 48
Apr  2 14:01:56 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:56 vm1 corosync[21054]:   [SERV  ] service.c:177 Service engine loaded: corosync vote quorum service v1.0 [5]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_glue.c:810 Initializing IPC on votequorum [5]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_us.c:511 server name: votequorum
Apr  2 14:01:56 vm1 corosync[21054]:   [SERV  ] service.c:177 Service engine loaded: corosync cluster quorum service v0.1 [3]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_glue.c:810 Initializing IPC on quorum [3]
Apr  2 14:01:56 vm1 corosync[21054]:   [QB    ] ipc_us.c:511 server name: quorum
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemudp.c:792 Receive multicast socket recv buffer size (262142 bytes).
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemudp.c:798 Transmit multicast socket send buffer size (262142 bytes).
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemudp.c:602 The network interface [192.168.102.141] is now up.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1977 entering GATHER state from 15.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3018 Creating commit token because I am the rep.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1471 Saving state aru 0 high seq received 0
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3263 Storing new sequence id for ring 4
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2033 entering COMMIT state.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2070 entering RECOVERY state.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2116 position [0] member 192.168.101.141:
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2120 previous ring seq 0 rep 192.168.101.141
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2126 aru 0 high delivered 0 received flag 1
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2224 Did not need to originate any messages in recovery.
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Apr  2 14:01:56 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemrrp.c:1361 Incrementing problem counter for seqid 1 iface 192.168.102.141 to [1 of 10]
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3694 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1487 Resetting old ring state
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1693 recovery to regular 1-0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1779 Delivering to app 1 to 0
Apr  2 14:01:57 vm1 corosync[21054]:   [MAIN  ] main.c:299 Member joined: r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) 
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:132 Members[1]: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:362 sending quorum notification to (nil), length = 52
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1905 entering OPERATIONAL state.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1907 A processor joined or left the membership and a new membership was formed.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 0 to 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[224766144]: votes: 1, expected: 1 flags: 9
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=1, expected_votes=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[224766144]: votes: 1, expected: 1 flags: 9
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=1, expected_votes=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=1, expected_votes=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 2 to 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [CPG   ] cpg.c:717 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:0 left:0)
Apr  2 14:01:57 vm1 corosync[21054]:   [CPG   ] cpg.c:717 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:0 left:0)
Apr  2 14:01:57 vm1 corosync[21054]:   [SYNC  ] sync.c:250 Committing synchronization for corosync cluster closed process group service v1.01
Apr  2 14:01:57 vm1 corosync[21054]:   [MAIN  ] main.c:251 Completed service synchronization, ready to provide service.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1977 entering GATHER state from 11.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3018 Creating commit token because I am the rep.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1471 Saving state aru 4 high seq received 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3263 Storing new sequence id for ring 8
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2033 entering COMMIT state.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2070 entering RECOVERY state.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2112 TRANS [0] member 192.168.101.141:
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2116 position [0] member 192.168.101.141:
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2120 previous ring seq 4 rep 192.168.101.141
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2126 aru 4 high delivered 4 received flag 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2116 position [1] member 192.168.101.142:
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2120 previous ring seq 4 rep 192.168.101.142
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2126 aru 4 high delivered 4 received flag 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2224 Did not need to originate any messages in recovery.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4349 got commit token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:4402 Sending initial ORF token
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3664 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3675 install seq 0 aru 0 high seq received 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3694 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1487 Resetting old ring state
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1693 recovery to regular 1-0
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1779 Delivering to app 5 to 4
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [MAIN  ] main.c:299 Member joined: r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) 
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:132 Members[2]: 224766144 241543360
Apr  2 14:01:57 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:362 sending quorum notification to (nil), length = 56
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1905 entering OPERATIONAL state.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:1907 A processor joined or left the membership and a new membership was formed.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 0 to 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 241543360
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[241543360]: votes: 1, expected: 1 flags: 9
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1334 Sending expected votes callback
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 241543360
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 241543360
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[241543360]: votes: 1, expected: 1 flags: 1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 241543360
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 1 to 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[224766144]: votes: 1, expected: 1 flags: 9
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1334 Sending expected votes callback
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[224766144]: votes: 1, expected: 1 flags: 1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:554 flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceState: No
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1334 Sending expected votes callback
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1512 got nodeinfo message from cluster node 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:1517 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:784 total_votes=2, expected_votes=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 224766144 state=1, votes=1, expected=2
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:609 node 241543360 state=1, votes=1, expected=1
Apr  2 14:01:57 vm1 corosync[21054]:   [VOTEQ ] votequorum.c:519 lowest node id: 224766144 us: 224766144
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 1
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 2 to 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 4 to 5
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 5 to 6
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6 to pending delivery queue
Apr  2 14:01:57 vm1 corosync[21054]:   [CPG   ] cpg.c:717 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:1 left:0)
Apr  2 14:01:57 vm1 corosync[21054]:   [CPG   ] cpg.c:717 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Apr  2 14:01:57 vm1 corosync[21054]:   [CPG   ] cpg.c:717 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Apr  2 14:01:57 vm1 corosync[21054]:   [SYNC  ] sync.c:250 Committing synchronization for corosync cluster closed process group service v1.01
Apr  2 14:01:57 vm1 corosync[21054]:   [MAIN  ] main.c:251 Completed service synchronization, ready to provide service.
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 2
Apr  2 14:01:57 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 6
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21071]
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:01:59 vm1 corosync[21054]:   [CMAP  ] cmap.c:185 lib_init_fn: conn=0x13cc2b0
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21071]
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:01:59 vm1 corosync[21054]:   [CMAP  ] cmap.c:185 lib_init_fn: conn=0x13d1eb0
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipcs.c:674 HUP conn:0x13d1eb0 fd:20
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipcs.c:509 qb_ipcs_disconnect() state:2
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:405 cs_ipcs_connection_closed() 
Apr  2 14:01:59 vm1 corosync[21054]:   [CMAP  ] cmap.c:204 exit_fn for conn=0x13d1eb0
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:378 cs_ipcs_connection_destroyed() 
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:248 Free'ing ringbuffer: /dev/shm/qb-cmap-response-21071-20-header
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:248 Free'ing ringbuffer: /dev/shm/qb-cmap-event-21071-20-header
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:248 Free'ing ringbuffer: /dev/shm/qb-cmap-request-21071-20-header
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipcs.c:674 HUP conn:0x13cc2b0 fd:19
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipcs.c:509 qb_ipcs_disconnect() state:2
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:405 cs_ipcs_connection_closed() 
Apr  2 14:01:59 vm1 corosync[21054]:   [CMAP  ] cmap.c:204 exit_fn for conn=0x13cc2b0
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:378 cs_ipcs_connection_destroyed() 
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:248 Free'ing ringbuffer: /dev/shm/qb-cmap-response-21071-19-header
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:248 Free'ing ringbuffer: /dev/shm/qb-cmap-event-21071-19-header
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:248 Free'ing ringbuffer: /dev/shm/qb-cmap-request-21071-19-header
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Apr  2 14:01:59 vm1 pacemakerd[21072]:   notice: main: Starting Pacemaker 1.1.7 (Build: 7172b73):  agent-manpages ncurses libqb-logging  heartbeat corosync-native
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: main: Maximum core file size is: 18446744073709551615
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21072]
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: cluster_connect_cfg: Our nodeid: 224766144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: cluster_connect_cfg: Adding fd=5 to mainloop
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21072]
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1331 lib_init_fn: conn=0x13d1d20, cpd=0x13d2334
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: cluster_connect_cpg: Our nodeid: 224766144
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: cluster_connect_cpg: Adding fd=6 to mainloop
Apr  2 14:01:59 vm1 pacemakerd[21072]:   notice: update_node_processes: 0x1df96e0 Node 224766144 now known as vm1, was: 
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: start_child: Forked child 21074 for process cib
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000000102 (was 00000000000000000000000000000002)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: start_child: Forked child 21075 for process stonith-ng
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000000102)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: start_child: Forked child 21076 for process lrmd
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: start_child: Forked child 21077 for process attrd
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: start_child: Forked child 21078 for process pengine
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: start_child: Forked child 21079 for process crmd
Apr  2 14:01:59 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm1 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 pacemakerd[21072]:     info: main: Starting mainloop
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 6 to 8
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7 to pending delivery queue
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 224766144
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8 to pending delivery queue
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8
Apr  2 14:01:59 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 8 to 9
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9 to pending delivery queue
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 8
Apr  2 14:01:59 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 9
Apr  2 14:01:59 vm1 lrmd: [21076]: info: enabling coredumps
Apr  2 14:01:59 vm1 lrmd: [21076]: debug: main: run the loop...
Apr  2 14:01:59 vm1 lrmd: [21076]: info: Started.
Apr  2 14:01:59 vm1 stonith-ng[21075]:     info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Apr  2 14:01:59 vm1 stonith-ng[21075]:     info: get_cluster_type: Cluster type is: 'corosync'
Apr  2 14:01:59 vm1 stonith-ng[21075]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr  2 14:01:59 vm1 cib[21074]:     info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Apr  2 14:01:59 vm1 cib[21074]:     info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Apr  2 14:01:59 vm1 cib[21074]:  warning: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
Apr  2 14:01:59 vm1 cib[21074]:  warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Apr  2 14:01:59 vm1 cib[21074]:    debug: get_last_sequence: Series file /var/lib/heartbeat/crm/cib.last does not exist
Apr  2 14:01:59 vm1 cib[21074]:    debug: readCibXmlFile: Backup file /var/lib/heartbeat/crm/cib-99.raw not found
Apr  2 14:01:59 vm1 cib[21074]:  warning: readCibXmlFile: Continuing with an empty configuration.
Apr  2 14:01:59 vm1 cib[21074]:     info: validate_with_relaxng: Creating RNG parser context
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21075]
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:01:59 vm1 attrd[21077]:     info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Apr  2 14:01:59 vm1 attrd[21077]:     info: main: Starting up
Apr  2 14:01:59 vm1 attrd[21077]:     info: get_cluster_type: Cluster type is: 'corosync'
Apr  2 14:01:59 vm1 attrd[21077]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr  2 14:01:59 vm1 pengine[21078]:     info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Apr  2 14:01:59 vm1 pengine[21078]:    debug: main: Checking for old instances of pengine
Apr  2 14:01:59 vm1 pengine[21078]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr  2 14:01:59 vm1 pengine[21078]:    debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/pengine
Apr  2 14:01:59 vm1 pengine[21078]:    debug: main: Init server comms
Apr  2 14:01:59 vm1 crmd[21079]:     info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Apr  2 14:01:59 vm1 crmd[21079]:   notice: main: CRM Git Version: 7172b73
Apr  2 14:01:59 vm1 crmd[21079]:    debug: crmd_init: Starting crmd
Apr  2 14:01:59 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Apr  2 14:01:59 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:01:59 vm1 crmd[21079]:    debug: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Apr  2 14:01:59 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_STARTUP
Apr  2 14:01:59 vm1 crmd[21079]:    debug: do_startup: Registering Signal Handlers
Apr  2 14:01:59 vm1 crmd[21079]:    debug: do_startup: Creating CIB and LRM objects
Apr  2 14:01:59 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CIB_START
Apr  2 14:01:59 vm1 pengine[21078]:     info: main: Starting pengine
Apr  2 14:01:59 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr  2 14:01:59 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Apr  2 14:01:59 vm1 crmd[21079]:    debug: cib_native_signon_raw: Connection to command channel failed
Apr  2 14:01:59 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr  2 14:01:59 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Apr  2 14:01:59 vm1 crmd[21079]:    debug: cib_native_signon_raw: Connection to callback channel failed
Apr  2 14:01:59 vm1 crmd[21079]:    debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Apr  2 14:01:59 vm1 crmd[21079]:    debug: cib_native_signoff: Signing out of the CIB Service
Apr  2 14:01:59 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1331 lib_init_fn: conn=0x15d3ad0, cpd=0x16d4034
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21077]
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:02:00 vm1 cib[21074]:    debug: activateCibXml: Triggering CIB write for start op
Apr  2 14:02:00 vm1 attrd[21077]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 attrd[21077]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 attrd[21077]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1331 lib_init_fn: conn=0x16d5350, cpd=0x17d5934
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: init_cpg_connection: Adding fd=4 to mainloop
Apr  2 14:02:00 vm1 stonith-ng[21075]:     info: init_ais_connection_once: Connection to 'corosync': established
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: crm_new_peer: Creating entry for node vm1/224766144
Apr  2 14:02:00 vm1 stonith-ng[21075]:     info: crm_new_peer: Node vm1 now has id: 224766144
Apr  2 14:02:00 vm1 stonith-ng[21075]:     info: crm_new_peer: Node 224766144 is now known as vm1
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr  2 14:02:00 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: pcmk_client_connect: Channel 0x1df9b90 connected: 1 children
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a
Apr  2 14:02:00 vm1 attrd[21077]:    debug: init_cpg_connection: Adding fd=4 to mainloop
Apr  2 14:02:00 vm1 attrd[21077]:     info: init_ais_connection_once: Connection to 'corosync': established
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_new_peer: Creating entry for node vm1/224766144
Apr  2 14:02:00 vm1 attrd[21077]:     info: crm_new_peer: Node vm1 now has id: 224766144
Apr  2 14:02:00 vm1 attrd[21077]:     info: crm_new_peer: Node 224766144 is now known as vm1
Apr  2 14:02:00 vm1 attrd[21077]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: pcmk_client_connect: Channel 0x1dfb5d0 connected: 2 children
Apr  2 14:02:00 vm1 attrd[21077]:     info: main: Cluster connection active
Apr  2 14:02:00 vm1 attrd[21077]:     info: main: Accepting attribute updates
Apr  2 14:02:00 vm1 attrd[21077]:   notice: main: Starting mainloop...
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 9 to a
Apr  2 14:02:00 vm1 cib[21074]:     info: startCib: CIB Initialization completed successfully
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm1: id=224766144 seen=0 proc=00000000000000000000000000111312 (new)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 241543360
Apr  2 14:02:00 vm1 pacemakerd[21072]:   notice: update_node_processes: 0x1dfb980 Node 241543360 now known as vm2, was: 
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000000102 (was 00000000000000000000000000000002)
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000000102)
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering a to b
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b to pending delivery queue
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: update_node_processes: Node vm2 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:00 vm1 cib[21074]:     info: get_cluster_type: Cluster type is: 'corosync'
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering b to c
Apr  2 14:02:00 vm1 cib[21074]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq c to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 224766144
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 224766144
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x13d1d20
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq c
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq c
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq c
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq d
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_new_peer: Creating entry for node vm2/241543360
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering c to d
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq d to pending delivery queue
Apr  2 14:02:00 vm1 attrd[21077]:     info: crm_new_peer: Node vm2 now has id: 241543360
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq d
Apr  2 14:02:00 vm1 attrd[21077]:     info: crm_new_peer: Node 241543360 is now known as vm2
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq c
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000000102 (new)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq d
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000100102 (new)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq d
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000100112 (new)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including b
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering d to e
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000101112 (new)
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000111112 (new)
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq e to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:02:00 vm1 attrd[21077]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000111312 (new)
Apr  2 14:02:00 vm1 attrd[21077]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21074]
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:02:00 vm1 cib[21074]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 cib[21074]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 cib[21074]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1331 lib_init_fn: conn=0x17d95c0, cpd=0x17d9b14
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq e
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq e
Apr  2 14:02:00 vm1 cib[21074]:    debug: init_cpg_connection: Adding fd=4 to mainloop
Apr  2 14:02:00 vm1 cib[21074]:     info: init_ais_connection_once: Connection to 'corosync': established
Apr  2 14:02:00 vm1 cib[21074]:    debug: crm_new_peer: Creating entry for node vm1/224766144
Apr  2 14:02:00 vm1 cib[21074]:     info: crm_new_peer: Node vm1 now has id: 224766144
Apr  2 14:02:00 vm1 cib[21074]:     info: crm_new_peer: Node 224766144 is now known as vm1
Apr  2 14:02:00 vm1 cib[21074]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Apr  2 14:02:00 vm1 pacemakerd[21072]:    debug: pcmk_client_connect: Channel 0x1e04bd0 connected: 3 children
Apr  2 14:02:00 vm1 cib[21074]:     info: cib_init: Starting cib mainloop
Apr  2 14:02:00 vm1 cib[21074]:    debug: crm_update_peer: Node vm1: id=224766144 seen=0 proc=00000000000000000000000000111312 (new)
Apr  2 14:02:00 vm1 cib[21074]:    debug: crm_new_peer: Creating entry for node vm2/241543360
Apr  2 14:02:00 vm1 cib[21074]:     info: crm_new_peer: Node vm2 now has id: 241543360
Apr  2 14:02:00 vm1 cib[21074]:     info: crm_new_peer: Node 241543360 is now known as vm2
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq e
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq e
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including d
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering e to f
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq f to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 224766144
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq f
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq f
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq f
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq f
Apr  2 14:02:00 vm1 cib[21074]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 10
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering f to 10
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 10 to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 241543360
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 10
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 10
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 10
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including e
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 11
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 10 to 11
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 11 to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 241543360
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 11
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 11
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 11
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 10
Apr  2 14:02:00 vm1 attrd[21077]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:00 vm1 attrd[21077]:    debug: pcmk_cpg_membership: Member[1] 241543360 
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 11
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 12
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 11 to 12
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 12 to pending delivery queue
Apr  2 14:02:00 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 241543360
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 12
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 12
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 12
Apr  2 14:02:00 vm1 cib[21074]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:00 vm1 cib[21074]:    debug: pcmk_cpg_membership: Member[1] 241543360 
Apr  2 14:02:00 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 12
Apr  2 14:02:00 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 14643 (46f51b6b-a9f8-46a1-94d0-8b05140fab01): off
Apr  2 14:02:00 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 14643 (46f51b6b-a9f8-46a1-94d0-8b05140fab01): on
Apr  2 14:02:00 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr  2 14:02:00 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr  2 14:02:00 vm1 crmd[21079]:    debug: cib_native_signon_raw: Connection to CIB successful
Apr  2 14:02:00 vm1 crmd[21079]:     info: do_cib_control: CIB connection established
Apr  2 14:02:00 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_HA_CONNECT
Apr  2 14:02:00 vm1 crmd[21079]:     info: get_cluster_type: Cluster type is: 'corosync'
Apr  2 14:02:00 vm1 crmd[21079]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr  2 14:02:00 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 21079 (d6180752-f4eb-4379-9a46-69e23c08ddc6): on
Apr  2 14:02:00 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 21079 (d6180752-f4eb-4379-9a46-69e23c08ddc6): on
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21079]
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:02:01 vm1 crmd[21079]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 crmd[21079]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 crmd[21079]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [CPG   ] cpg.c:1331 lib_init_fn: conn=0x17dd8f0, cpd=0x17ddf04
Apr  2 14:02:01 vm1 crmd[21079]:    debug: init_cpg_connection: Adding fd=6 to mainloop
Apr  2 14:02:01 vm1 crmd[21079]:     info: init_ais_connection_once: Connection to 'corosync': established
Apr  2 14:02:01 vm1 crmd[21079]:    debug: crm_new_peer: Creating entry for node vm1/224766144
Apr  2 14:02:01 vm1 crmd[21079]:     info: crm_new_peer: Node vm1 now has id: 224766144
Apr  2 14:02:01 vm1 crmd[21079]:     info: crm_new_peer: Node 224766144 is now known as vm1
Apr  2 14:02:01 vm1 crmd[21079]:     info: ais_status_callback: status: vm1 is now unknown
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:01 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 12 to 13
Apr  2 14:02:01 vm1 pacemakerd[21072]:    debug: pcmk_client_connect: Channel 0x1e0a680 connected: 4 children
Apr  2 14:02:01 vm1 crmd[21079]:    debug: init_quorum_connection: Configuring Pacemaker to obtain quorum from Corosync
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 13 to pending delivery queue
Apr  2 14:02:01 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 224766144
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 13
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 13
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 13
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 13
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ipc_us.c:611 IPC credentials authenticated
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ipc_shm.c:243 connecting to client [21079]
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ringbuffer.c:190 shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QB    ] ipc_glue.c:260 connection created
Apr  2 14:02:01 vm1 crmd[21079]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 crmd[21079]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 crmd[21079]:    debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
Apr  2 14:02:01 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:319 lib_init_fn: conn=0x17d7d20
Apr  2 14:02:01 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:474 got quorum_type request on 0x17d7d20
Apr  2 14:02:01 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:398 got quorate request on 0x17d7d20
Apr  2 14:02:01 vm1 crmd[21079]:   notice: init_quorum_connection: Quorum acquired
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 13
Apr  2 14:02:01 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:415 got trackstart request on 0x17d7d20
Apr  2 14:02:01 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:423 sending initial status to 0x17d7d20
Apr  2 14:02:01 vm1 corosync[21054]:   [QUORUM] vsf_quorum.c:362 sending quorum notification to 0x17d7d20, length = 56
Apr  2 14:02:01 vm1 crmd[21079]:     info: do_ha_control: Connected to the cluster
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_READCONFIG
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LRM_CONNECT
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_lrm_control: Connecting to the LRM
Apr  2 14:02:01 vm1 lrmd: [21076]: debug: on_msg_register:client crmd [21079] registered
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_lrm_control: LRM connection established
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CCM_CONNECT
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_STARTED
Apr  2 14:02:01 vm1 crmd[21079]:     info: do_started: Delaying start, no membership data (0000000000100000)
Apr  2 14:02:01 vm1 crmd[21079]:    debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Apr  2 14:02:01 vm1 crmd[21079]:    debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
Apr  2 14:02:01 vm1 crmd[21079]:    debug: config_query_callback: Call 3 : Parsing CIB options
Apr  2 14:02:01 vm1 crmd[21079]:    debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr  2 14:02:01 vm1 crmd[21079]:    debug: config_query_callback: Checking for expired actions every 900000ms
Apr  2 14:02:01 vm1 crmd[21079]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr  2 14:02:01 vm1 stonith-ng[21075]:   notice: setup_cib: Watching for stonith topology changes
Apr  2 14:02:01 vm1 stonith-ng[21075]:     info: main: Starting stonith-ng mainloop
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: pcmk_cpg_membership: Member[1] 241543360 
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm1: id=224766144 seen=0 proc=00000000000000000000000000111312 (new)
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_new_peer: Creating entry for node vm2/241543360
Apr  2 14:02:01 vm1 stonith-ng[21075]:     info: crm_new_peer: Node vm2 now has id: 241543360
Apr  2 14:02:01 vm1 stonith-ng[21075]:     info: crm_new_peer: Node 241543360 is now known as vm2
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000000102 (new)
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000100102 (new)
Apr  2 14:02:01 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 21075 (1f305439-05f8-49ca-9850-3d23dc56aa44): on
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000100112 (new)
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000101112 (new)
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000111112 (new)
Apr  2 14:02:01 vm1 stonith-ng[21075]:    debug: crm_update_peer: Node vm2: id=241543360 seen=0 proc=00000000000000000000000000111312 (new)
Apr  2 14:02:01 vm1 crmd[21079]:   notice: crmd_peer_update: Status update: Client vm1/crmd now has status [online] (DC=<null>)
Apr  2 14:02:01 vm1 crmd[21079]:    debug: crm_update_peer: Node vm1: id=224766144 seen=0 proc=00000000000000000000000000111312 (new)
Apr  2 14:02:01 vm1 crmd[21079]:    debug: crm_new_peer: Creating entry for node vm2/241543360
Apr  2 14:02:01 vm1 crmd[21079]:     info: crm_new_peer: Node vm2 now has id: 241543360
Apr  2 14:02:01 vm1 crmd[21079]:     info: crm_new_peer: Node 241543360 is now known as vm2
Apr  2 14:02:01 vm1 crmd[21079]:     info: ais_status_callback: status: vm2 is now unknown
Apr  2 14:02:01 vm1 crmd[21079]:     info: pcmk_quorum_notification: Membership 8: quorum retained (2)
Apr  2 14:02:01 vm1 crmd[21079]:    debug: pcmk_quorum_notification: Member[0] 224766144 
Apr  2 14:02:01 vm1 crmd[21079]:     info: ais_status_callback: status: vm1 is now member (was unknown)
Apr  2 14:02:01 vm1 crmd[21079]:     info: crm_update_peer: Node vm1: id=224766144 state=member (new) addr=(null) votes=0 born=0 seen=8 proc=00000000000000000000000000111312
Apr  2 14:02:01 vm1 crmd[21079]:    debug: pcmk_quorum_notification: Member[1] 241543360 
Apr  2 14:02:01 vm1 crmd[21079]:     info: ais_status_callback: status: vm2 is now member (was unknown)
Apr  2 14:02:01 vm1 crmd[21079]:     info: crm_update_peer: Node vm2: id=241543360 state=member (new) addr=(null) votes=0 born=0 seen=8 proc=00000000000000000000000000111312
Apr  2 14:02:01 vm1 crmd[21079]:    debug: post_cache_update: Updated cache after membership event 8.
Apr  2 14:02:01 vm1 crmd[21079]:    debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_STARTED
Apr  2 14:02:01 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 13 to 14
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 14 to pending delivery queue
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 14
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 14
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 14
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 14
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_started: Init server comms
Apr  2 14:02:01 vm1 crmd[21079]:   notice: do_started: The local CRM is operational
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_election_check: Ignore election check: we not in an election
Apr  2 14:02:01 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Apr  2 14:02:01 vm1 crmd[21079]:   notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 14
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:01 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_QUERY
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 15
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 14 to 15
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 15 to pending delivery queue
Apr  2 14:02:01 vm1 corosync[21054]:   [CPG   ] cpg.c:1133 got procjoin message from cluster node 241543360
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 15
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 15
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 15
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 15
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 16
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 15 to 16
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 16 to pending delivery queue
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 16
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 16
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 16
Apr  2 14:02:01 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 16
Apr  2 14:02:02 vm1 crmd[21079]:    debug: do_cl_join_query: Querying for a DC
Apr  2 14:02:02 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Apr  2 14:02:02 vm1 crmd[21079]:    debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=14
Apr  2 14:02:02 vm1 crmd[21079]:    debug: pcmk_cpg_membership: Member[0] 224766144 
Apr  2 14:02:02 vm1 crmd[21079]:    debug: pcmk_cpg_membership: Member[1] 241543360 
Apr  2 14:02:02 vm1 crmd[21079]:    debug: te_connect_stonith: Attempting connection to fencing daemon...
Apr  2 14:02:02 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 16 to 17
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 17 to pending delivery queue
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 17
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 17
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 17
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 17
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 17
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 18
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 17 to 18
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 18 to pending delivery queue
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 18
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 18
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 18
Apr  2 14:02:02 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 18
Apr  2 14:02:03 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/st_command
Apr  2 14:02:03 vm1 crmd[21079]:    debug: get_stonith_token: Obtained registration token: da963fda-8ae3-417e-89b5-34537fc7f6f2
Apr  2 14:02:03 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/st_callback
Apr  2 14:02:03 vm1 crmd[21079]:    debug: get_stonith_token: Obtained registration token: 51fbc45e-7da0-4ebd-8fc5-13dfd18e329a
Apr  2 14:02:03 vm1 crmd[21079]:    debug: stonith_api_signon: Connection to STONITH successful
Apr  2 14:02:03 vm1 stonith-ng[21075]:    debug: stonith_command: Processing register from crmd (               0)
Apr  2 14:02:03 vm1 stonith-ng[21075]:    debug: stonith_command: Processing st_notify from 21079 (               0)
Apr  2 14:02:03 vm1 stonith-ng[21075]:    debug: stonith_command: Setting st_notify_disconnect callbacks for 21079 (51fbc45e-7da0-4ebd-8fc5-13dfd18e329a): ON
Apr  2 14:02:03 vm1 stonith-ng[21075]:    debug: stonith_command: Processing st_notify from 21079 (               0)
Apr  2 14:02:03 vm1 stonith-ng[21075]:    debug: stonith_command: Setting st_fence callbacks for 21079 (51fbc45e-7da0-4ebd-8fc5-13dfd18e329a): ON
Apr  2 14:02:05 vm1 attrd[21077]:    debug: cib_connect: CIB signon attempt 1
Apr  2 14:02:05 vm1 attrd[21077]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Apr  2 14:02:05 vm1 attrd[21077]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Apr  2 14:02:05 vm1 attrd[21077]:    debug: cib_native_signon_raw: Connection to CIB successful
Apr  2 14:02:05 vm1 attrd[21077]:     info: cib_connect: Connected to the CIB after 1 signon attempts
Apr  2 14:02:05 vm1 attrd[21077]:     info: cib_connect: Sending full refresh
Apr  2 14:02:05 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 21077 (ba1dc5df-2e67-4493-a6ea-c018311b63c2): on
Apr  2 14:02:22 vm1 crmd[21079]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_DC_TIMEOUT: [ state=S_PENDING cause=C_TIMER_POPPED origin=crm_timer_popped ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_WARN  
Apr  2 14:02:22 vm1 crmd[21079]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Apr  2 14:02:22 vm1 crmd[21079]:   notice: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_uptime: Current CPU usage is: 0s, 15997us
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_election_vote: Started election 2
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=16
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 18 to 19
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 19 to pending delivery queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_election_count_vote: Created voted hash
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_uptime: Current CPU usage is: 0s, 15997us
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_election_count_vote: Election 2 (current: 2, owner: 224766144): Processed vote from vm1 (Recorded)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 19
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 19
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 19
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 19
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 19 to 1a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1a to pending delivery queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_uptime: Current CPU usage is: 0s, 15997us
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_compare_age: Win: 15997 vs 0  (usec)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_election_count_vote: Election 2 (current: 2, owner: 224766144): Processed no-vote from vm2 (Recorded)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_election_check: Destroying voted hash
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Apr  2 14:02:22 vm1 crmd[21079]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_te_control: Registering TE UUID: c7106bb4-e73e-4614-ab19-3be6a9a6cdab
Apr  2 14:02:22 vm1 crmd[21079]:     info: set_graph_functions: Setting custom graph functions
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_te_control: Transitioner is now active
Apr  2 14:02:22 vm1 crmd[21079]:    debug: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr  2 14:02:22 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=19
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 21079 (d6180752-f4eb-4379-9a46-69e23c08ddc6): on
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_dc_takeover: Taking over DC status for this partition
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_readwrite: We are now in R/W mode
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/5, version=0.0.1): ok (rc=0)
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/6, version=0.0.2): ok (rc=0)
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] does not exist
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 19
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 cib[21074]:    debug: activateCibXml: Triggering CIB write for cib_modify op
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 1a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 1a to 1d
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: - <cib admin_epoch="0" epoch="0" num_updates="2" />
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1b to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1c to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1d to pending delivery queue
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + <cib epoch="1" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" >
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1b
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   <configuration >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     <crm_config >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" __crm_diff_marker__="added:top" >
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1b
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-7172b73" />
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +       </cluster_property_set>
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1b
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     </crm_config>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   </configuration>
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1c
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + </cib>
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1c
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=0.1.1): ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1c
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] does not exist
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1d
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 1d to 1f
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr  2 14:02:22 vm1 crmd[21079]:    debug: initialize_join: join-1: Initializing join data (flag=true)
Apr  2 14:02:22 vm1 crmd[21079]:     info: join_make_offer: Making join offers based on membership 8
Apr  2 14:02:22 vm1 crmd[21079]:    debug: join_make_offer: join-1: Sending offer to vm1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: join_make_offer: join-1: Sending offer to vm2
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1e to pending delivery queue
Apr  2 14:02:22 vm1 cib[21074]:    debug: activateCibXml: Triggering CIB write for cib_modify op
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 1f to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 1f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 1d
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: - <cib admin_epoch="0" epoch="1" num_updates="1" />
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 1f to 21
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + <cib epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vm1" update-client="crmd" cib-last-written="Mon Apr  2 14:02:22 2012" >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   <configuration >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     <crm_config >
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 20 to pending delivery queue
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Apr  2 14:02:22 vm1 crmd[21079]:    debug: handle_request: Raising I_JOIN_OFFER: join-1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Apr  2 14:02:22 vm1 crmd[21079]:     info: update_dc: Set DC to vm1 (3.0.6)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" __crm_diff_marker__="added:top" />
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 21 to pending delivery queue
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +       </cluster_property_set>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     </crm_config>
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 20
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   </configuration>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + </cib>
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 20
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/12, version=0.2.1): ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 20
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Call 13 : Parsing CIB options
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Checking for expired actions every 900000ms
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 21
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 21
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 20
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 21
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 21
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 1f
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Call 14 : Parsing CIB options
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Checking for expired actions every 900000ms
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 22
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 21 to 22
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 22 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 22
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: Processing req from vm2
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: join-1: Welcoming node vm2 (ref join_request-crmd-1333342942-3)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: join-1: Still waiting on 1 outstanding offers
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 22
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 22
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 21
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Call 15 : Parsing CIB options
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr  2 14:02:22 vm1 crmd[21079]:    debug: config_query_callback: Checking for expired actions every 900000ms
Apr  2 14:02:22 vm1 crmd[21079]:    debug: join_query_callback: Respond to join offer join-1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: join_query_callback: Acknowledging vm1 as our DC
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 22 to 24
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 23 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 24 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 23
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 23
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 23
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 23
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 24
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 24
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 24
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 24
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 22
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 24 to 25
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 25 to pending delivery queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: Processing req from vm1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: vm1 has a better generation number than the current max vm2
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: join-1: Welcoming node vm1 (ref join_request-crmd-1333342942-5)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: join-1: Integration of 2 peers complete: do_dc_join_filter_offer
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:22 vm1 crmd[21079]:   notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=25
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_finalize: Finializing join-1 for 2 clients
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_dc_join_finalize: join-1: Syncing the CIB from vm1 to the rest of the cluster
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 25
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 25
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 25
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 25
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 24
Apr  2 14:02:22 vm1 cib[21074]:    debug: sync_our_cib: Syncing CIB to all peers
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 25
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 25 to 27
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 26 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 27 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 26
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 26
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 26
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 26
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 27
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 27
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 27
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 27
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/17, version=0.2.1): ok (rc=0)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: join-1: Still waiting on 2 integrated nodes
Apr  2 14:02:22 vm1 crmd[21079]:    debug: finalize_sync_callback: Notifying 2 clients of join-1 results
Apr  2 14:02:22 vm1 crmd[21079]:    debug: finalize_join_for: join-1: ACK'ing join request from vm1, state member
Apr  2 14:02:22 vm1 crmd[21079]:    debug: finalize_join_for: join-1: ACK'ing join request from vm2, state member
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 27 to 29
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 28 to pending delivery queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: handle_request: Raising I_JOIN_RESULT: join-1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_cl_join_finalize_respond: Confirming join join-1: join_ack_nack
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_cl_join_finalize_respond: join-1: Join complete.  Sending local LRM status to vm1
Apr  2 14:02:22 vm1 crmd[21079]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Apr  2 14:02:22 vm1 crmd[21079]:     info: update_attrd: Connecting to attrd...
Apr  2 14:02:22 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Apr  2 14:02:22 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: terminate=(null) for vm1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: shutdown=(null) for vm1
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr  2 14:02:22 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,terminate,(null),vm1],[vm1]
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: update message from crmd: terminate=<null>
Apr  2 14:02:22 vm1 attrd[21077]:     info: find_hash_entry: Creating hash entry for terminate
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Apr  2 14:02:22 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,shutdown,(null),vm1],[vm1]
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: update message from crmd: shutdown=<null>
Apr  2 14:02:22 vm1 attrd[21077]:     info: find_hash_entry: Creating hash entry for shutdown
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
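
Here 'host' is the local uname ("vm1"), so the safe_str_neq() check fails and attrd handles the update itself; the broken fail-count case differs only in that 'host' carries the corosync nodeID ("224766144"), which can never equal a uname, so the update is forwarded to a peer that does not exist under that name and is silently lost. A minimal standalone model of that routing decision (plain C, illustration only, not Pacemaker source):

  #include <stdio.h>
  #include <string.h>

  /* Model of attrd's choice: apply an update locally when 'host' is
   * unset or matches the local uname, otherwise forward it to the
   * named peer.  A numeric nodeID in 'host' never matches a uname. */
  static void route_update(const char *host, const char *local_uname,
                           const char *attr)
  {
      if (host == NULL || strcmp(host, local_uname) == 0) {
          printf("apply locally:   %s\n", attr);
      } else {
          /* lost if no peer is known by this name */
          printf("forward to '%s': %s\n", host, attr);
      }
  }

  int main(void)
  {
      route_update("vm1", "vm1", "terminate");                  /* local     */
      route_update(NULL, "vm1", "probe_complete");              /* local     */
      route_update("224766144", "vm1", "fail-count-prmDummy1"); /* misrouted */
      return 0;
  }
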
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_ack: Ignoring op=join_ack_nack message from vm1
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 29 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 28
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 28
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 28
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 28
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 29
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 29
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 29
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 29
Apr  2 14:02:22 vm1 cib[21074]:    debug: activateCibXml: Triggering CIB write for cib_modify op
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 27
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 29 to 2b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2a to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2b to pending delivery queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Apr  2 14:02:22 vm1 crmd[21079]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_ack: join-1: Registered callback for LRM update 22
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 2b to 2c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2c to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 2c to 2d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2d to pending delivery queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_dc_join_ack: join-1: Updating node state to member for vm2
Apr  2 14:02:22 vm1 crmd[21079]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm2']/lrm
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_ack: join-1: Registered callback for LRM update 24
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2c
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 29
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 2d
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: - <cib admin_epoch="0" epoch="2" num_updates="1" />
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + <cib epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vm1" update-client="crmd" cib-last-written="Mon Apr  2 14:02:22 2012" >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   <configuration >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     <nodes >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +       <node id="224766144" uname="vm1" type="normal" __crm_diff_marker__="added:top" />
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     </nodes>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   </configuration>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + </cib>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/18, version=0.3.1): ok (rc=0)
Apr  2 14:02:22 vm1 cib[21074]:    debug: activateCibXml: Triggering CIB write for cib_modify op
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 2d to 2f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2e to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 2f to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 2f
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: - <cib admin_epoch="0" epoch="3" num_updates="1" />
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + <cib epoch="4" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vm1" update-client="crmd" cib-last-written="Mon Apr  2 14:02:22 2012" >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   <configuration >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     <nodes >
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +       <node id="241543360" uname="vm2" type="normal" __crm_diff_marker__="added:top" />
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +     </nodes>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: +   </configuration>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib:diff: + </cib>
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.4.1): ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 2f
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: //node_state[@uname='vm1']/transient_attributes was already removed
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 2f to 31
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 30 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 31 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 30
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 30
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 30
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 30
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 31
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 31
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 31
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 31
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vm1']/transient_attributes (origin=local/crmd/20, version=0.4.2): ok (rc=0)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: erase_xpath_callback: Deletion of "//node_state[@uname='vm1']/transient_attributes": ok (rc=0)
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: //node_state[@uname='vm1']/lrm was already removed
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 31
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 31 to 33
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 32 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 33 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 32
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 32
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 32
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 32
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 33
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 33
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 33
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 33
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vm1']/lrm (origin=local/crmd/21, version=0.4.3): ok (rc=0)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: erase_xpath_callback: Deletion of "//node_state[@uname='vm1']/lrm": ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 33
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 33 to 35
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 34 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 35 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 34
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 34
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 34
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 34
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 35
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 35
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 35
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 35
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: //node_state[@uname='vm2']/transient_attributes was already removed
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vm2']/transient_attributes (origin=vm2/crmd/9, version=0.4.4): ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 35
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 35 to 37
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 36 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 37 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 36
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 36
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 36
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 36
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 37
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 37
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 37
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 37
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: join_update_complete_callback: Join update 22 complete
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: join-1 complete: join_update_complete_callback
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:22 vm1 crmd[21079]:   notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: ghash_update_cib_node: Updating vm1: true (overwrite=true) hash_size=2
Apr  2 14:02:22 vm1 crmd[21079]:    debug: ghash_update_cib_node: Updating vm2: true (overwrite=true) hash_size=2
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr  2 14:02:22 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_update_quorum: Updating quorum status to true (call=27)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_te_invoke: Cancelling the transition: inactive
Apr  2 14:02:22 vm1 crmd[21079]:     info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_pe_invoke: Query 28: Requesting the current CIB: S_POLICY_ENGINE
Apr  2 14:02:22 vm1 attrd[21077]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: //node_state[@uname='vm2']/lrm was already removed
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 37
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 37 to 39
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 38 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 39 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 38
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 38
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 38
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 38
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 39
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 39
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 39
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 39
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_delete): 0.4.5 -> 0.4.6 (S_POLICY_ENGINE)
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vm2']/lrm (origin=local/crmd/23, version=0.4.6): ok (rc=0)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: erase_xpath_callback: Deletion of "//node_state[@uname='vm2']/lrm": ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 39
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 39 to 3b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3a to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3b to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3b
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.6 -> 0.4.7 (S_POLICY_ENGINE)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: join_update_complete_callback: Join update 24 complete
Apr  2 14:02:22 vm1 crmd[21079]:    debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 3b
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.7 -> 0.4.8 (S_POLICY_ENGINE)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 3b to 3d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3c to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3d to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3d
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/25, version=0.4.8): ok (rc=0)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.8 -> 0.4.9 (S_POLICY_ENGINE)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 3d to 3f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3e to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 3f to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 3f
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.9 -> 0.4.10 (S_POLICY_ENGINE)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 3d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 3f to 41
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 40 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 41 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 40
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 40
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 40
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 40
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 41
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 41
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 41
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 41
Apr  2 14:02:22 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/27, version=0.4.10): ok (rc=0)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_pe_invoke_callback: Invoking the PE: query=28, ref=pe_calc-dc-1333342942-9, seq=8, quorate=1
Apr  2 14:02:22 vm1 pengine[21078]:     info: unpack_config: Startup probes: enabled
Apr  2 14:02:22 vm1 pengine[21078]:    debug: unpack_config: STONITH timeout: 60000
Apr  2 14:02:22 vm1 pengine[21078]:    debug: unpack_config: STONITH of failed nodes is enabled
Apr  2 14:02:22 vm1 pengine[21078]:    debug: unpack_config: Stop all active resources: false
Apr  2 14:02:22 vm1 pengine[21078]:    debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr  2 14:02:22 vm1 pengine[21078]:    debug: unpack_config: Default stickiness: 0
Apr  2 14:02:22 vm1 pengine[21078]:    debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Apr  2 14:02:22 vm1 pengine[21078]:     info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr  2 14:02:22 vm1 pengine[21078]:     info: unpack_domains: Unpacking domains
Apr  2 14:02:22 vm1 pengine[21078]:    error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Apr  2 14:02:22 vm1 pengine[21078]:    error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Apr  2 14:02:22 vm1 pengine[21078]:    error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Apr  2 14:02:22 vm1 pengine[21078]:     info: determine_online_status: Node vm1 is online
Apr  2 14:02:22 vm1 pengine[21078]:     info: determine_online_status: Node vm2 is online
Apr  2 14:02:22 vm1 pengine[21078]:   notice: stage6: Delaying fencing operations until there are resources to manage
Apr  2 14:02:22 vm1 pengine[21078]:    debug: get_last_sequence: Series file /var/lib/pengine/pe-input.last does not exist
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 3f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 41 to 43
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 42 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 43 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 42
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 42
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 42
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 42
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 43
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 43
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 43
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 43
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Apr  2 14:02:22 vm1 crmd[21079]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr  2 14:02:22 vm1 crmd[21079]:    debug: unpack_graph: Unpacked transition 0: 2 actions in 2 synapses
Apr  2 14:02:22 vm1 crmd[21079]:     info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1333342942-9) derived from /var/lib/pengine/pe-input-0.bz2
Apr  2 14:02:22 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 3: probe_complete probe_complete on vm2 - no waiting
Apr  2 14:02:22 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 2: probe_complete probe_complete on vm1 (local) - no waiting
Apr  2 14:02:22 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Apr  2 14:02:22 vm1 crmd[21079]:    debug: run_graph: ==== Transition 0 (Complete=0, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-0.bz2): In-progress
Apr  2 14:02:22 vm1 crmd[21079]:   notice: run_graph: ==== Transition 0 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-0.bz2): Complete
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_graph_trigger: Transition 0 is now complete
Apr  2 14:02:22 vm1 crmd[21079]:    debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr  2 14:02:22 vm1 crmd[21079]:    debug: notify_crmd: Transition 0 status: done - <null>
Apr  2 14:02:22 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Apr  2 14:02:22 vm1 crmd[21079]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr  2 14:02:22 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,probe_complete,true,(null)],[vm1]
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: update message from crmd: probe_complete=true
Apr  2 14:02:22 vm1 attrd[21077]:     info: find_hash_entry: Creating hash entry for probe_complete
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: Supplied: true, Current: (null), Stored: (null)
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_local_callback: New value of probe_complete is true
Apr  2 14:02:22 vm1 attrd[21077]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_state_transition: Starting PEngine Recheck Timer
Apr  2 14:02:22 vm1 crmd[21079]:    debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=37
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:22 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 41
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 43 to 44
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 44 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 44
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 44
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 44
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 44
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='224766144']//transient_attributes//nvpair[@name='probe_complete'] does not exist
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Apr  2 14:02:22 vm1 attrd[21077]:   notice: attrd_perform_update: Sent update 4: probe_complete=true
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x16d5350
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 43
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 44 to 45
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 45 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 45
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 45
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 45
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 45
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.10 -> 0.4.11 (S_IDLE)
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_cib_callback: Update 4 for probe_complete=true passed
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 46
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 45 to 46
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 46 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 46
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 46
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 46
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 44
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='224766144']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.11 -> 0.4.12 (S_IDLE)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 47
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 46 to 47
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 47 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 47
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 48
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 47 to 48
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 48 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 47
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 48
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 48
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 47
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 48
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 46
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 48 to 4a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 49 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4a to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 49
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 49
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 49
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 49
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4a
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4a
Apr  2 14:02:22 vm1 attrd[21077]:    debug: attrd_cib_callback: Update 6 for probe_complete=true passed
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.12 -> 0.4.13 (S_IDLE)
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 4a to 4b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4b to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 4b to 4c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4c to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4b
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 48
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 4c to 4e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4d to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4e to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4d
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4e
Apr  2 14:02:22 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:22 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.4.13 -> 0.4.14 (S_IDLE)
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 4c
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 4e to 50
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 4f to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 50 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 4f
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 50
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 50
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 50
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 50
Apr  2 14:02:22 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 4e
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 50 to 52
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 51 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 52 to pending delivery queue
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 51
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 51
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 51
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 51
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 52
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 52
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 52
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 52
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 50
Apr  2 14:02:22 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 52
Apr  2 14:02:22 vm1 pengine[21078]:   notice: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-0.bz2
Apr  2 14:02:22 vm1 pengine[21078]:   notice: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
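
At 14:02:33 the test configuration is loaded via a cib_replace (the cib:diff below), which triggers the new election that follows and also sets stonith-enabled=false, clearing the unpack_resources errors above. Reconstructed from that diff, the configuration corresponds roughly to the following crm shell input (a best-effort reconstruction, not the exact commands used):

  property no-quorum-policy="ignore" \
          stonith-enabled="false" \
          startup-fencing="false"
  primitive prmDummy1 ocf:pacemaker:Dummy \
          op start interval="0" timeout="60s" on-fail="restart" \
          op monitor interval="10s" timeout="60s" on-fail="restart" \
          op stop interval="0" timeout="60s" on-fail="block"
  location rsc_location-prmDummy1-1 prmDummy1 \
          rule 200: #uname eq vm1
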
Apr  2 14:02:33 vm1 lrmd: [21076]: debug: on_msg_register:client lrmadmin [21122] registered
Apr  2 14:02:33 vm1 lrmd: [21076]: debug: on_receive_cmd: the IPC to client [pid:21122] disconnected.
Apr  2 14:02:33 vm1 lrmd: [21076]: debug: unregister_client: client lrmadmin [pid:21122] is unregistered
Apr  2 14:02:33 vm1 lrmd: [21076]: debug: on_msg_register:client lrmadmin [21123] registered
Apr  2 14:02:33 vm1 lrmd: [21076]: debug: on_receive_cmd: the IPC to client [pid:21123] disconnected.
Apr  2 14:02:33 vm1 lrmd: [21076]: debug: unregister_client: client lrmadmin [pid:21123] is unregistered
Apr  2 14:02:33 vm1 cib[21074]:    debug: activateCibXml: Triggering CIB write for cib_replace op
Apr  2 14:02:33 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_replace): 0.4.14 -> 0.5.1 (S_IDLE)
Apr  2 14:02:33 vm1 crmd[21079]:     info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.5.1) : Non-status change
Apr  2 14:02:33 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr  2 14:02:33 vm1 crmd[21079]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_pe_invoke: Query 29: Requesting the current CIB: S_POLICY_ENGINE
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_replace_notify: Replaced: 0.4.14 -> 0.5.1 from <null>
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Apr  2 14:02:33 vm1 crmd[21079]:    debug: ghash_update_cib_node: Updating vm1: true (overwrite=true) hash_size=2
Apr  2 14:02:33 vm1 crmd[21079]:    debug: ghash_update_cib_node: Updating vm2: true (overwrite=true) hash_size=2
Apr  2 14:02:33 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr  2 14:02:33 vm1 crmd[21079]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr  2 14:02:33 vm1 crmd[21079]:    debug: update_dc: Unset DC. Was vm1
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Apr  2 14:02:33 vm1 crmd[21079]:    debug: crm_uptime: Current CPU usage is: 0s, 31995us
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_election_vote: Started election 3
Apr  2 14:02:33 vm1 crmd[21079]:    debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=42
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 52 to 53
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 53 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 53
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 53
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 53
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 53
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_election_count_vote: Created voted hash
Apr  2 14:02:33 vm1 crmd[21079]:    debug: crm_uptime: Current CPU usage is: 0s, 31995us
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_election_count_vote: Election 3 (current: 3, owner: 224766144): Processed vote from vm1 (Recorded)
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_election_check: Still waiting on 1 non-votes (2 total)
Apr  2 14:02:33 vm1 attrd[21077]:     info: do_cib_replaced: Sending full refresh
Apr  2 14:02:33 vm1 attrd[21077]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: - <cib admin_epoch="0" epoch="4" num_updates="14" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: + <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vm1" update-client="crmd" cib-last-written="Mon Apr  2 14:02:22 2012" have-quorum="1" dc-uuid="224766144" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +   <configuration >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     <crm_config >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" __crm_diff_marker__="added:top" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false" __crm_diff_marker__="added:top" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="false" __crm_diff_marker__="added:top" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       </cluster_property_set>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     </crm_config>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     <resources >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       <primitive class="ocf" id="prmDummy1" provider="pacemaker" type="Dummy" __crm_diff_marker__="added:top" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <operations >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +           <op id="prmDummy1-start-0" interval="0" name="start" on-fail="restart" timeout="60s" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +           <op id="prmDummy1-monitor-10s" interval="10s" name="monitor" on-fail="restart" timeout="60s" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +           <op id="prmDummy1-stop-0" interval="0" name="stop" on-fail="block" timeout="60s" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         </operations>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       </primitive>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     </resources>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     <constraints >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       <rsc_location id="rsc_location-prmDummy1-1" rsc="prmDummy1" __crm_diff_marker__="added:top" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <rule id="rsc_location-prmDummy1-1-rule" score="200" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +           <expression attribute="#uname" id="rsc_location-prmDummy1-1-expression" operation="eq" value="vm1" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         </rule>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       </rsc_location>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     </constraints>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     <rsc_defaults __crm_diff_marker__="added:top" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       <meta_attributes id="rsc-options" >
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="rsc-options-resource-stickiness" name="resource-stickiness" value="INFINITY" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +         <nvpair id="rsc-options-migration-threshold" name="migration-threshold" value="1" />
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +       </meta_attributes>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +     </rsc_defaults>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: +   </configuration>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib:diff: + </cib>
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/cibadmin/2, version=0.5.1): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 54
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 53 to 54
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 54 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 54
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 54
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 54
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 53
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 54 to 58
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 55 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 56 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 57 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 58 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 55
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 55
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 55
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 55
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 56
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 56
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 56
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 56
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 57
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 57
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 57
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 57
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 58
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 58
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 58
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 58
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Apr  2 14:02:33 vm1 crmd[21079]:    debug: crm_uptime: Current CPU usage is: 0s, 31995us
Apr  2 14:02:33 vm1 crmd[21079]:    debug: crm_compare_age: Win: 31995 vs 0  (usec)
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_election_count_vote: Election 3 (current: 3, owner: 224766144): Processed no-vote from vm2 (Recorded)
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_election_check: Destroying voted hash
Apr  2 14:02:33 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Apr  2 14:02:33 vm1 crmd[21079]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_START
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_te_control: The transitioner is already active
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_PE_START
Apr  2 14:02:33 vm1 crmd[21079]:    debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Apr  2 14:02:33 vm1 crmd[21079]:    debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=44
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Apr  2 14:02:33 vm1 crmd[21079]:     info: do_dc_takeover: Taking over DC status for this partition
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 54
Apr  2 14:02:33 vm1 cib[21074]:    debug: xmlfromIPC: Peer disconnected
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 59
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 58 to 59
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 59 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 59
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 59
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 59
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 58
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 59
Apr  2 14:02:33 vm1 cib[21074]:    debug: sync_our_cib: Syncing CIB to vm2
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 59 to 5c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5a to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5b to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5c to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5c
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vm2/vm2/(null), version=0.5.1): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/31, version=0.5.2): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 5c to 5e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5d to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5e to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5e
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 cib[21074]:    debug: cib_process_readwrite: We are still in R/W mode
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 5e to 5f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 5f to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 60
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 5f to 60
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 60 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 60
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 60
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 5f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 60
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 5c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 60 to 62
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 61 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 62 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 61
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 61
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 61
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 61
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 62
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 62
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 62
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 62
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/33, version=0.5.4): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='224766144']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x16d5350
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 60
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 62 to 64
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 63 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 64 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 63
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 63
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 63
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 63
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 64
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 64
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 64
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 64
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 62
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 64 to 66
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 65 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 66 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 65
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 65
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 65
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 65
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 66
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 66
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 66
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 66
Apr  2 14:02:33 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='224766144']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 64
Apr  2 14:02:33 vm1 attrd[21077]:    debug: attrd_cib_callback: Update 8 for probe_complete=true passed
Apr  2 14:02:33 vm1 attrd[21077]:    debug: attrd_cib_callback: Update 10 for probe_complete=true passed
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 66
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 66 to 68
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 67 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 68 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 67
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 67
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 67
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 67
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 68
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 68
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 68
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 68
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 68
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 68 to 6a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 69 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6a to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 69
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 69
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 69
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 69
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6a
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 6a
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/34, version=0.5.8): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 6a to 6c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6b to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6c to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6b
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6c
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 6c
Apr  2 14:02:33 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/36, version=0.5.9): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 6c to 6e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6d to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6e to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6d
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6e
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6e
Apr  2 14:02:33 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 6e
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Apr  2 14:02:33 vm1 crmd[21079]:    debug: initialize_join: join-2: Initializing join data (flag=true)
Apr  2 14:02:33 vm1 crmd[21079]:    debug: join_make_offer: join-2: Sending offer to vm1
Apr  2 14:02:33 vm1 crmd[21079]:    debug: join_make_offer: join-2: Sending offer to vm2
Apr  2 14:02:33 vm1 crmd[21079]:     info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Apr  2 14:02:33 vm1 crmd[21079]:    debug: config_query_callback: Call 30 : Parsing CIB options
Apr  2 14:02:33 vm1 crmd[21079]:    debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr  2 14:02:33 vm1 crmd[21079]:    debug: config_query_callback: Checking for expired actions every 900000ms
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 6e to 70
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 6f to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 70 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 6f
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 70
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 70
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 70
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 70
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 crmd[21079]:    debug: handle_request: Raising I_JOIN_OFFER: join-2
Apr  2 14:02:33 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Apr  2 14:02:33 vm1 crmd[21079]:     info: update_dc: Set DC to vm1 (3.0.6)
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 71
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 70 to 71
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 71 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 71
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 72
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 71 to 72
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 72 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 72
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 71
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 72
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 71
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 72
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 70
Apr  2 14:02:33 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 72
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: Processing req from vm2
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: join-2: Welcoming node vm2 (ref join_request-crmd-1333342954-6)
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-2
Apr  2 14:02:33 vm1 crmd[21079]:    debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr  2 14:02:33 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: join-2: Still waiting on 1 outstanding offers
Apr  2 14:02:33 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/38, version=0.5.10): ok (rc=0)
Apr  2 14:02:33 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 72 to 74
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 73 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 74 to pending delivery queue
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 73
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 73
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 73
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 73
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 74
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 74
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 74
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 74
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:33 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 74
Apr  2 14:02:33 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 74 to 76
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 75 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 76 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 75
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 75
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 75
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 75
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 76
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 76
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 76
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 76
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 76
Apr  2 14:02:34 vm1 crmd[21079]:    debug: config_query_callback: Call 39 : Parsing CIB options
Apr  2 14:02:34 vm1 crmd[21079]:    debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr  2 14:02:34 vm1 crmd[21079]:    debug: config_query_callback: Checking for expired actions every 900000ms
Apr  2 14:02:34 vm1 crmd[21079]:    debug: join_query_callback: Respond to join offer join-2
Apr  2 14:02:34 vm1 crmd[21079]:    debug: join_query_callback: Acknowledging vm1 as our DC
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 76 to 77
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 77 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 77
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 77
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 77
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 77
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: Processing req from vm1
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: vm1 has a better generation number than the current max vm2
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: join-2: Welcoming node vm1 (ref join_request-crmd-1333342954-15)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-2
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: join-2: Integration of 2 peers complete: do_dc_join_filter_offer
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:34 vm1 crmd[21079]:   notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Apr  2 14:02:34 vm1 crmd[21079]:    debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=48
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_finalize: Finalizing join-2 for 2 clients
Apr  2 14:02:34 vm1 crmd[21079]:     info: do_dc_join_finalize: join-2: Syncing the CIB from vm1 to the rest of the cluster
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 77
Apr  2 14:02:34 vm1 cib[21074]:    debug: sync_our_cib: Syncing CIB to all peers
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 77 to 7a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 78 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 79 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7a to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 78
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 78
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 78
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 78
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 79
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 79
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 79
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 79
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7a
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/41, version=0.5.11): ok (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: join-2: Still waiting on 2 integrated nodes
Apr  2 14:02:34 vm1 crmd[21079]:    debug: finalize_sync_callback: Notifying 2 clients of join-2 results
Apr  2 14:02:34 vm1 crmd[21079]:    debug: finalize_join_for: join-2: ACK'ing join request from vm1, state member
Apr  2 14:02:34 vm1 crmd[21079]:    debug: finalize_join_for: join-2: ACK'ing join request from vm2, state member
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 7a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 7a to 7c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7b to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7c to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7c
Apr  2 14:02:34 vm1 crmd[21079]:    debug: handle_request: Raising I_JOIN_RESULT: join-2
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_cl_join_finalize_respond: Confirming join join-2: join_ack_nack
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_cl_join_finalize_respond: join-2: Join complete.  Sending local LRM status to vm1
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_ack: Ignoring op=join_ack_nack message from vm1
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 7c to 7e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7d to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7e to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7e
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr  2 14:02:34 vm1 crmd[21079]:     info: do_dc_join_ack: join-2: Updating node state to member for vm1
Apr  2 14:02:34 vm1 crmd[21079]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_ack: join-2: Registered callback for LRM update 45
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 7e to 7f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 7f to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 80
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 7f to 80
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 80 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 81
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 80 to 81
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 81 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 80
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 81
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 80
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 81
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 7f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 80
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 81
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 7c
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Apr  2 14:02:34 vm1 crmd[21079]:     info: do_dc_join_ack: join-2: Updating node state to member for vm2
Apr  2 14:02:34 vm1 crmd[21079]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm2']/lrm
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_ack: join-2: Registered callback for LRM update 47
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 81
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 81 to 83
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 82 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 83 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 82
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 82
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 82
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 82
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 83
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 83
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 83
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 83
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 83
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=0.5.13): ok (rc=0)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 83 to 85
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 84 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 85 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 84
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 84
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 84
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 84
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 85
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 85
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 85
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 85
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 85
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/43, version=0.5.14): ok (rc=0)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 85 to 87
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 86 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 87 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 86
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 86
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 86
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 86
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 87
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 87
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 87
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 87
Apr  2 14:02:34 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='vm1']/lrm (/cib/status/node_state[1]/lrm)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 87
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vm1']/lrm (origin=local/crmd/44, version=0.5.15): ok (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: erase_xpath_callback: Deletion of "//node_state[@uname='vm1']/lrm": ok (rc=0)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 87 to 89
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 88 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 89 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 88
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 88
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 88
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 88
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 89
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 89
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 89
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 89
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 89
Apr  2 14:02:34 vm1 crmd[21079]:    debug: join_update_complete_callback: Join update 45 complete
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: join-2 complete: join_update_complete_callback
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:34 vm1 crmd[21079]:   notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: ghash_update_cib_node: Updating vm1: true (overwrite=true) hash_size=2
Apr  2 14:02:34 vm1 crmd[21079]:    debug: ghash_update_cib_node: Updating vm2: true (overwrite=true) hash_size=2
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr  2 14:02:34 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Apr  2 14:02:34 vm1 crmd[21079]:    debug: crm_update_quorum: Updating quorum status to true (call=50)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_te_invoke: Cancelling the transition: inactive
Apr  2 14:02:34 vm1 crmd[21079]:     info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_pe_invoke: Query 51: Requesting the current CIB: S_POLICY_ENGINE
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 89 to 8b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8a to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8b to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8b
Apr  2 14:02:34 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='vm2']/lrm (/cib/status/node_state[2]/lrm)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_delete): 0.5.16 -> 0.5.17 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 8b
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vm2']/lrm (origin=local/crmd/46, version=0.5.17): ok (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: erase_xpath_callback: Deletion of "//node_state[@uname='vm2']/lrm": ok (rc=0)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 8b to 8d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8c to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8d to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 8d
Apr  2 14:02:34 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='224766144']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Apr  2 14:02:34 vm1 attrd[21077]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Apr  2 14:02:34 vm1 attrd[21077]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.17 -> 0.5.18 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 8d to 8f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8e to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 8f to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 8f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 8f
Apr  2 14:02:34 vm1 cib[21074]:    debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='224766144']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Apr  2 14:02:34 vm1 attrd[21077]:    debug: attrd_cib_callback: Update 12 for probe_complete=true passed
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x16d5350
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 8f to 90
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 90 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 90
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 90
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 90
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 90
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.18 -> 0.5.19 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 attrd[21077]:    debug: attrd_cib_callback: Update 14 for probe_complete=true passed
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.19 -> 0.5.20 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 90
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 90 to 92
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 91 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 92 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 91
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 91
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 91
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 91
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 92
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 92
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 92
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 92
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 93
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 92 to 93
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 93 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 93
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 93
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 93
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 93
Apr  2 14:02:34 vm1 crmd[21079]:    debug: join_update_complete_callback: Join update 47 complete
Apr  2 14:02:34 vm1 crmd[21079]:    debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 93 to 95
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 94 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 95 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 94
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 94
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 94
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 94
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 95
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 95
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 95
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 95
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.20 -> 0.5.21 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 95
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/48, version=0.5.21): ok (rc=0)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 95 to 97
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 96 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 97 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 96
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 96
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 96
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 96
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 97
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 97
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 97
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 97
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 97
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.21 -> 0.5.22 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 97 to 99
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 98 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 99 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 98
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 98
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 98
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 98
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 99
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 99
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 99
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 99
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.22 -> 0.5.23 (S_POLICY_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 99
Apr  2 14:02:34 vm1 cib[21074]:     info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/50, version=0.5.23): ok (rc=0)
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 99 to 9b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9a to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9b to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9a
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9b
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_pe_invoke_callback: Invoking the PE: query=51, ref=pe_calc-dc-1333342954-19, seq=8, quorate=1
Apr  2 14:02:34 vm1 pengine[21078]:     info: unpack_config: Startup probes: enabled
Apr  2 14:02:34 vm1 pengine[21078]:    debug: unpack_config: STONITH timeout: 60000
Apr  2 14:02:34 vm1 pengine[21078]:    debug: unpack_config: STONITH of failed nodes is disabled
Apr  2 14:02:34 vm1 pengine[21078]:    debug: unpack_config: Stop all active resources: false
Apr  2 14:02:34 vm1 pengine[21078]:    debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr  2 14:02:34 vm1 pengine[21078]:    debug: unpack_config: Default stickiness: 0
Apr  2 14:02:34 vm1 pengine[21078]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Apr  2 14:02:34 vm1 pengine[21078]:     info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr  2 14:02:34 vm1 pengine[21078]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Apr  2 14:02:34 vm1 pengine[21078]:     info: unpack_domains: Unpacking domains
Apr  2 14:02:34 vm1 pengine[21078]:     info: determine_online_status: Node vm1 is online
Apr  2 14:02:34 vm1 pengine[21078]:     info: determine_online_status: Node vm2 is online
Apr  2 14:02:34 vm1 pengine[21078]:     info: native_print: prmDummy1	(ocf::pacemaker:Dummy):	Stopped 
Apr  2 14:02:34 vm1 pengine[21078]:    debug: native_assign_node: Assigning vm1 to prmDummy1
Apr  2 14:02:34 vm1 pengine[21078]:    debug: native_create_probe: Probing prmDummy1 on vm1 (Stopped)
Apr  2 14:02:34 vm1 pengine[21078]:    debug: native_create_probe: Probing prmDummy1 on vm2 (Stopped)
Apr  2 14:02:34 vm1 pengine[21078]:     info: RecurringOp:  Start recurring monitor (10s) for prmDummy1 on vm1
Apr  2 14:02:34 vm1 pengine[21078]:   notice: LogActions: Start   prmDummy1	(vm1)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Apr  2 14:02:34 vm1 crmd[21079]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr  2 14:02:34 vm1 crmd[21079]:    debug: unpack_graph: Unpacked transition 1: 7 actions in 7 synapses
Apr  2 14:02:34 vm1 crmd[21079]:     info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1333342954-19) derived from /var/lib/pengine/pe-input-1.bz2
Apr  2 14:02:34 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 6: monitor prmDummy1_monitor_0 on vm2
Apr  2 14:02:34 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 4: monitor prmDummy1_monitor_0 on vm1 (local)
Apr  2 14:02:34 vm1 lrmd: [21076]: debug: on_msg_add_rsc:client [21079] adds resource prmDummy1
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_lrm_rsc_op: Performing key=4:1:7:c7106bb4-e73e-4614-ab19-3be6a9a6cdab op=prmDummy1_monitor_0
Apr  2 14:02:34 vm1 lrmd: [21076]: debug: on_msg_perform_op:2400: copying parameters for rsc prmDummy1
Apr  2 14:02:34 vm1 lrmd: [21076]: debug: on_msg_perform_op: add an operation operation monitor[2] on prmDummy1 for client 21079, its parameters: crm_feature_set=[3.0.6] CRM_meta_timeout=[20000]  to the operation list.
Apr  2 14:02:34 vm1 lrmd: [21076]: info: rsc:prmDummy1 probe[2] (pid 21140)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: run_graph: ==== Transition 1 (Complete=0, Pending=2, Fired=2, Skipped=0, Incomplete=5, Source=/var/lib/pengine/pe-input-1.bz2): In-progress
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 9b
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 9b to 9d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9c to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9d to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9c
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9d
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9d
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.23 -> 0.5.24 (S_TRANSITION_ENGINE)
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 9d
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 9d to 9f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9e to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq 9f to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9e
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq 9f
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including 9f
Apr  2 14:02:34 vm1 cib[21074]:    debug: xmlfromIPC: Peer disconnected
Apr  2 14:02:34 vm1 Dummy(prmDummy1)[21140]: DEBUG: prmDummy1 monitor : 7
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering 9f to a0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a0 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a1
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering a0 to a1
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a1 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a1
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a1
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a0
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.24 -> 0.5.25 (S_TRANSITION_ENGINE)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_monitor_0 (6) confirmed on vm2 (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 5: probe_complete probe_complete on vm2 - no waiting
Apr  2 14:02:34 vm1 crmd[21079]:    debug: run_graph: ==== Transition 1 (Complete=1, Pending=1, Fired=1, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-1.bz2): In-progress
Apr  2 14:02:34 vm1 crmd[21079]:    debug: run_graph: ==== Transition 1 (Complete=2, Pending=1, Fired=0, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-1.bz2): In-progress
Apr  2 14:02:34 vm1 pengine[21078]:   notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-1.bz2
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a1
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17dd8f0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including a1
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering a1 to a5
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a2 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a3 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a4 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a5 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a2
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a2
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a2
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a2
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a3
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a3
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a3
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a3
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a4
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a4
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a4
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a4
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a5
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a5
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a5
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a5
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including a5
Apr  2 14:02:34 vm1 lrmd: [21076]: WARN: Managed prmDummy1:monitor process 21140 exited with return code 7.
Apr  2 14:02:34 vm1 lrmd: [21076]: info: operation monitor[2] on prmDummy1 for client 21079: pid 21140 exited with return code 7
Apr  2 14:02:34 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resouce prmDummy1 after complete monitor op (interval=0)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_rsc_restart_list: Attr state is not reloadable
Apr  2 14:02:34 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_monitor_0 (call=2, rc=7, cib-update=52, confirmed=true) not running
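
Note: the rc=7 here is only the initial probe, prmDummy1_monitor_0. The
resource is not started yet, so "not running" is the expected result and
no fail-count update is sent. A rough sketch of the rule, assuming the
policy engine simply compares an op's return code against the value it
expected for the resource's state (compare the later unpack_rsc_op line,
"returned 7 (not running) instead of the expected value: 0 (ok)"):

  /* sketch only, not pacemaker's actual code */
  #include <stdio.h>

  /* an operation counts as failed only when its return code differs
     from the value the policy engine expected */
  static int op_is_failure(int rc, int expected_rc)
  {
      return rc != expected_rc;
  }

  int main(void)
  {
      printf("probe while stopped : %d\n", op_is_failure(7, 7)); /* 0 */
      printf("recurring monitor   : %d\n", op_is_failure(7, 0)); /* 1 */
      return 0;
  }
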
Apr  2 14:02:34 vm1 crmd[21079]:    debug: update_history_cache: Appending monitor op to history for 'prmDummy1'
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering a5 to a8
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a6 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a7 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a8 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a6
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a6
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a6
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a6
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a7
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a7
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.25 -> 0.5.26 (S_TRANSITION_ENGINE)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_monitor_0 (4) confirmed on vm1 (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 3: probe_complete probe_complete on vm1 (local) - no waiting
Apr  2 14:02:34 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,probe_complete,true,(null)],[vm1]
Apr  2 14:02:34 vm1 attrd[21077]:    debug: attrd_local_callback: update message from crmd: probe_complete=true
Apr  2 14:02:34 vm1 attrd[21077]:    debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Apr  2 14:02:34 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_pseudo_action: Pseudo action 2 fired and confirmed
Apr  2 14:02:34 vm1 crmd[21079]:    debug: run_graph: ==== Transition 1 (Complete=3, Pending=0, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-1.bz2): In-progress
Apr  2 14:02:34 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 7: start prmDummy1_start_0 on vm1 (local)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_lrm_rsc_op: Performing key=7:1:0:c7106bb4-e73e-4614-ab19-3be6a9a6cdab op=prmDummy1_start_0
Apr  2 14:02:34 vm1 lrmd: [21076]: debug: on_msg_perform_op:2400: copying parameters for rsc prmDummy1
Apr  2 14:02:34 vm1 lrmd: [21076]: debug: on_msg_perform_op: add an operation operation start[3] on prmDummy1 for client 21079, its parameters: CRM_meta_name=[start] crm_feature_set=[3.0.6] CRM_meta_on_fail=[restart] CRM_meta_timeout=[60000]  to the operation list.
Apr  2 14:02:34 vm1 lrmd: [21076]: info: rsc:prmDummy1 start[3] (pid 21151)
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a7
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a7
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a8
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a8
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a8
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a8
Apr  2 14:02:34 vm1 crmd[21079]:    debug: run_graph: ==== Transition 1 (Complete=5, Pending=1, Fired=1, Skipped=0, Incomplete=1, Source=/var/lib/pengine/pe-input-1.bz2): In-progress
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including a8
Apr  2 14:02:34 vm1 Dummy(prmDummy1)[21151]: DEBUG: prmDummy1 start : 0
Apr  2 14:02:34 vm1 lrmd: [21076]: info: Managed prmDummy1:start process 21151 exited with return code 0.
Apr  2 14:02:34 vm1 lrmd: [21076]: info: operation start[3] on prmDummy1 for client 21079: pid 21151 exited with return code 0
Apr  2 14:02:34 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resouce prmDummy1 after complete start op (interval=0)
Apr  2 14:02:34 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_start_0 (call=3, rc=0, cib-update=53, confirmed=true) ok
Apr  2 14:02:34 vm1 crmd[21079]:    debug: update_history_cache: Appending start op to history for 'prmDummy1'
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.26 -> 0.5.27 (S_TRANSITION_ENGINE)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_start_0 (7) confirmed on vm1 (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 8: monitor prmDummy1_monitor_10000 on vm1 (local)
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering a8 to ab
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq a9 to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq aa to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq ab to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a9
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a9
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a9
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq a9
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq aa
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq aa
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq aa
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq aa
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ab
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ab
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ab
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ab
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_lrm_rsc_op: Performing key=8:1:0:c7106bb4-e73e-4614-ab19-3be6a9a6cdab op=prmDummy1_monitor_10000
Apr  2 14:02:34 vm1 lrmd: [21076]: debug: on_msg_perform_op: add an operation operation monitor[4] on prmDummy1 for client 21079, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_on_fail=[restart] CRM_meta_interval=[10000] CRM_meta_timeout=[60000]  to the operation list.
Apr  2 14:02:34 vm1 lrmd: [21076]: info: rsc:prmDummy1 monitor[4] (pid 21159)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: run_graph: ==== Transition 1 (Complete=6, Pending=1, Fired=1, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-1.bz2): In-progress
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including ab
Apr  2 14:02:34 vm1 Dummy(prmDummy1)[21159]: DEBUG: prmDummy1 monitor : 0
Apr  2 14:02:34 vm1 lrmd: [21076]: info: Managed prmDummy1:monitor process 21159 exited with return code 0.
Apr  2 14:02:34 vm1 lrmd: [21076]: info: operation monitor[4] on prmDummy1 for client 21079: pid 21159 exited with return code 0
Apr  2 14:02:34 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resouce prmDummy1 after complete monitor op (interval=10000)
Apr  2 14:02:34 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_monitor_10000 (call=4, rc=0, cib-update=54, confirmed=false) ok
Apr  2 14:02:34 vm1 crmd[21079]:    debug: update_history_cache: Appending monitor op to history for 'prmDummy1'
Apr  2 14:02:34 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering ab to ae
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq ac to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq ad to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq ae to pending delivery queue
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ac
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ac
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ac
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ac
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ad
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ad
Apr  2 14:02:34 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.27 -> 0.5.28 (S_TRANSITION_ENGINE)
Apr  2 14:02:34 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_monitor_10000 (8) confirmed on vm1 (rc=0)
Apr  2 14:02:34 vm1 crmd[21079]:   notice: run_graph: ==== Transition 1 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-1.bz2): Complete
Apr  2 14:02:34 vm1 crmd[21079]:    debug: te_graph_trigger: Transition 1 is now complete
Apr  2 14:02:34 vm1 crmd[21079]:    debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr  2 14:02:34 vm1 crmd[21079]:    debug: notify_crmd: Transition 1 status: done - <null>
Apr  2 14:02:34 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Apr  2 14:02:34 vm1 crmd[21079]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ad
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ad
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ae
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ae
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ae
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ae
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_state_transition: Starting PEngine Recheck Timer
Apr  2 14:02:34 vm1 crmd[21079]:    debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=65
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:34 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:34 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including ae
Apr  2 14:02:37 vm1 corosync[21054]:   [TOTEM ] totemrrp.c:1329 ring 1 active with no faults
Apr  2 14:02:44 vm1 lrmd: [21076]: debug: rsc:prmDummy1 monitor[4] (pid 21166)
Apr  2 14:02:44 vm1 Dummy(prmDummy1)[21166]: DEBUG: prmDummy1 monitor : 0
Apr  2 14:02:44 vm1 lrmd: [21076]: info: operation monitor[4] on prmDummy1 for client 21079: pid 21166 exited with return code 0
Apr  2 14:02:54 vm1 lrmd: [21076]: debug: rsc:prmDummy1 monitor[4] (pid 21175)
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21175]: DEBUG: prmDummy1 monitor : 7
Apr  2 14:02:54 vm1 lrmd: [21076]: info: operation monitor[4] on prmDummy1 for client 21079: pid 21175 exited with return code 7
Apr  2 14:02:54 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resouce prmDummy1 after complete monitor op (interval=10000)
Apr  2 14:02:54 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_monitor_10000 (call=4, rc=7, cib-update=55, confirmed=false) not running
Apr  2 14:02:54 vm1 crmd[21079]:    debug: update_history_cache: Appending monitor op to history for 'prmDummy1'
Apr  2 14:02:54 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:54 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.28 -> 0.5.29 (S_IDLE)
Apr  2 14:02:54 vm1 crmd[21079]:     info: process_graph_event: Action prmDummy1_monitor_10000 arrived after a completed transition
Apr  2 14:02:54 vm1 crmd[21079]:     info: abort_transition_graph: process_graph_event:481 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=prmDummy1_last_failure_0, magic=0:7;8:1:0:c7106bb4-e73e-4614-ab19-3be6a9a6cdab, cib=0.5.29) : Inactive graph
Apr  2 14:02:54 vm1 crmd[21079]:  warning: update_failcount: Updating failcount for prmDummy1 on 224766144 after failed monitor: rc=7 (update=value++, time=1333342974)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: fail-count-prmDummy1=value++ for 224766144
Apr  2 14:02:54 vm1 crmd[21079]:    debug: attrd_update_delegate: Sent update: last-failure-prmDummy1=1333342974 for 224766144
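
Note: both updates above are addressed to "224766144", the corosync
nodeID, not to the uname. Written out as (attr, value, host) tuples,
this is what crmd handed to attrd (a sketch of the fields visible in the
log only; my assumption is that the host field should carry the uname
"vm1" for an update meant for this node):

  /* sketch: the two updates as they appear in the log */
  #include <stdio.h>

  int main(void)
  {
      const char *updates[2][3] = {
          /* attr,                    value,        host        */
          { "fail-count-prmDummy1",   "value++",    "224766144" },
          { "last-failure-prmDummy1", "1333342974", "224766144" },
      };
      for (int i = 0; i < 2; i++)
          printf("%s=%s for %s\n",
                 updates[i][0], updates[i][1], updates[i][2]);
      return 0;
  }
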
Apr  2 14:02:54 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr  2 14:02:54 vm1 crmd[21079]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:54 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,fail-count-prmDummy1,value++,224766144],[vm1]
Apr  2 14:02:54 vm1 attrd[21077]:     info: attrd_local_callback: DEBUG: [crmd,update,last-failure-prmDummy1,1333342974,224766144],[vm1]
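
In these DEBUG lines the last bracketed value is attrd's local uname
("vm1") while the update's target host is the numeric nodeID
("224766144"). The two strings can never match, so attrd concludes the
update is meant for some other node and relays it over the cluster
instead of storing it locally. A minimal sketch of that routing decision
(my reading of the behaviour, not the actual attrd source):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      const char *host        = "224766144"; /* target named by crmd */
      const char *attrd_uname = "vm1";       /* local node name      */

      if (host != NULL && strcmp(host, attrd_uname) != 0) {
          /* treated as remote: forwarded, never written here */
          printf("relay update for '%s'\n", host);
      } else {
          /* treated as local: fail-count would reach the CIB */
          printf("apply update locally\n");
      }
      return 0;
  }

As a result the fail-count-prmDummy1 update is relayed away and never
applied on vm1.
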
Apr  2 14:02:54 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x16d5350
Apr  2 14:02:54 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x16d5350
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_pe_invoke: Query 56: Requesting the current CIB: S_POLICY_ENGINE
Apr  2 14:02:54 vm1 pengine[21078]:     info: unpack_config: Startup probes: enabled
Apr  2 14:02:54 vm1 pengine[21078]:    debug: unpack_config: STONITH timeout: 60000
Apr  2 14:02:54 vm1 pengine[21078]:    debug: unpack_config: STONITH of failed nodes is disabled
Apr  2 14:02:54 vm1 pengine[21078]:    debug: unpack_config: Stop all active resources: false
Apr  2 14:02:54 vm1 pengine[21078]:    debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Apr  2 14:02:54 vm1 pengine[21078]:    debug: unpack_config: Default stickiness: 0
Apr  2 14:02:54 vm1 pengine[21078]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Apr  2 14:02:54 vm1 pengine[21078]:     info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Apr  2 14:02:54 vm1 pengine[21078]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Apr  2 14:02:54 vm1 pengine[21078]:     info: unpack_domains: Unpacking domains
Apr  2 14:02:54 vm1 pengine[21078]:     info: determine_online_status: Node vm1 is online
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_pe_invoke_callback: Invoking the PE: query=56, ref=pe_calc-dc-1333342974-26, seq=8, quorate=1
Apr  2 14:02:54 vm1 pengine[21078]:     info: determine_online_status: Node vm2 is online
Apr  2 14:02:54 vm1 pengine[21078]:    debug: unpack_rsc_op: prmDummy1_last_failure_0 on vm1 returned 7 (not running) instead of the expected value: 0 (ok)
Apr  2 14:02:54 vm1 pengine[21078]:  warning: unpack_rsc_op: Processing failed op prmDummy1_last_failure_0 on vm1: not running (7)
Apr  2 14:02:54 vm1 pengine[21078]:     info: native_print: prmDummy1	(ocf::pacemaker:Dummy):	Started vm1 FAILED
Apr  2 14:02:54 vm1 pengine[21078]:    debug: common_apply_stickiness: Resource prmDummy1: preferring current location (node=vm1, weight=1000000)
Apr  2 14:02:54 vm1 pengine[21078]:    debug: native_assign_node: Assigning vm1 to prmDummy1
Apr  2 14:02:54 vm1 pengine[21078]:     info: RecurringOp:  Start recurring monitor (10s) for prmDummy1 on vm1
Apr  2 14:02:54 vm1 pengine[21078]:   notice: LogActions: Recover prmDummy1	(Started vm1)
Apr  2 14:02:54 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering ae to b2
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq af to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b0 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b1 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b2 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq af
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq af
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq af
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq af
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b1
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b1
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b1
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b1
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b2
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b2
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b2
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b2
Apr  2 14:02:54 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Apr  2 14:02:54 vm1 crmd[21079]:   notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Apr  2 14:02:54 vm1 crmd[21079]:    debug: unpack_graph: Unpacked transition 2: 4 actions in 4 synapses
Apr  2 14:02:54 vm1 crmd[21079]:     info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1333342974-26) derived from /var/lib/pengine/pe-input-2.bz2
Apr  2 14:02:54 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 2: stop prmDummy1_stop_0 on vm1 (local)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: cancel_op: Cancelling op 4 for prmDummy1 (prmDummy1:4)
Apr  2 14:02:54 vm1 lrmd: [21076]: info: cancel_op: operation monitor[4] on prmDummy1 for client 21079, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_on_fail=[restart] CRM_meta_interval=[10000] CRM_meta_timeout=[60000]  cancelled
Apr  2 14:02:54 vm1 lrmd: [21076]: debug: on_msg_cancel_op: operation 4 cancelled
Apr  2 14:02:54 vm1 crmd[21079]:    debug: cancel_op: Op 4 for prmDummy1 (prmDummy1:4): cancelled
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_lrm_rsc_op: Performing key=2:2:0:c7106bb4-e73e-4614-ab19-3be6a9a6cdab op=prmDummy1_stop_0
Apr  2 14:02:54 vm1 lrmd: [21076]: debug: on_msg_perform_op: add an operation operation stop[5] on prmDummy1 for client 21079, its parameters: CRM_meta_name=[stop] crm_feature_set=[3.0.6] CRM_meta_on_fail=[block] CRM_meta_timeout=[60000]  to the operation list.
Apr  2 14:02:54 vm1 lrmd: [21076]: info: rsc:prmDummy1 stop[5] (pid 21182)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: run_graph: ==== Transition 2 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Apr  2 14:02:54 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_monitor_10000 (call=4, status=1, cib-update=0, confirmed=true) Cancelled
Apr  2 14:02:54 vm1 crmd[21079]:    debug: update_history_cache: Appending monitor op to history for 'prmDummy1'
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including b2
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21182]: DEBUG: prmDummy1 stop : 0
Apr  2 14:02:54 vm1 lrmd: [21076]: info: Managed prmDummy1:stop process 21182 exited with return code 0.
Apr  2 14:02:54 vm1 lrmd: [21076]: info: operation stop[5] on prmDummy1 for client 21079: pid 21182 exited with return code 0
Apr  2 14:02:54 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resource prmDummy1 after complete stop op (interval=0)
Apr  2 14:02:54 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_stop_0 (call=5, rc=0, cib-update=57, confirmed=true) ok
Apr  2 14:02:54 vm1 crmd[21079]:    debug: update_history_cache: Appending stop op to history for 'prmDummy1'
Apr  2 14:02:54 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:54 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.29 -> 0.5.30 (S_TRANSITION_ENGINE)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_stop_0 (2) confirmed on vm1 (rc=0)
Apr  2 14:02:54 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 7: start prmDummy1_start_0 on vm1 (local)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_lrm_rsc_op: Performing key=7:2:0:c7106bb4-e73e-4614-ab19-3be6a9a6cdab op=prmDummy1_start_0
Apr  2 14:02:54 vm1 lrmd: [21076]: debug: on_msg_perform_op:2400: copying parameters for rsc prmDummy1
Apr  2 14:02:54 vm1 lrmd: [21076]: debug: on_msg_perform_op: add an operation operation start[6] on prmDummy1 for client 21079, its parameters: CRM_meta_name=[start] crm_feature_set=[3.0.6] CRM_meta_on_fail=[restart] CRM_meta_timeout=[60000]  to the operation list.
Apr  2 14:02:54 vm1 pengine[21078]:   notice: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-2.bz2
Apr  2 14:02:54 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering b2 to b5
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b3 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b4 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b5 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b3
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b3
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b3
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b3
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b4
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b4
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b4
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b4
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b5
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b5
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b5
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b5
Apr  2 14:02:54 vm1 lrmd: [21076]: info: rsc:prmDummy1 start[6] (pid 21189)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: te_pseudo_action: Pseudo action 3 fired and confirmed
Apr  2 14:02:54 vm1 crmd[21079]:    debug: run_graph: ==== Transition 2 (Complete=1, Pending=1, Fired=2, Skipped=0, Incomplete=1, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Apr  2 14:02:54 vm1 crmd[21079]:    debug: run_graph: ==== Transition 2 (Complete=2, Pending=1, Fired=0, Skipped=0, Incomplete=1, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including b5
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21189]: DEBUG: prmDummy1 start : 0
Apr  2 14:02:54 vm1 lrmd: [21076]: info: Managed prmDummy1:start process 21189 exited with return code 0.
Apr  2 14:02:54 vm1 lrmd: [21076]: info: operation start[6] on prmDummy1 for client 21079: pid 21189 exited with return code 0
Apr  2 14:02:54 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resource prmDummy1 after complete start op (interval=0)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: get_rsc_restart_list: Attr state is not reloadable
Apr  2 14:02:54 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_start_0 (call=6, rc=0, cib-update=58, confirmed=true) ok
Apr  2 14:02:54 vm1 crmd[21079]:    debug: update_history_cache: Appending start op to history for 'prmDummy1'
Apr  2 14:02:54 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:54 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.30 -> 0.5.31 (S_TRANSITION_ENGINE)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_start_0 (7) confirmed on vm1 (rc=0)
Apr  2 14:02:54 vm1 crmd[21079]:     info: te_rsc_command: Initiating action 1: monitor prmDummy1_monitor_10000 on vm1 (local)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_lrm_rsc_op: Performing key=1:2:0:c7106bb4-e73e-4614-ab19-3be6a9a6cdab op=prmDummy1_monitor_10000
Apr  2 14:02:54 vm1 lrmd: [21076]: debug: on_msg_perform_op: add an operation operation monitor[7] on prmDummy1 for client 21079, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_on_fail=[restart] CRM_meta_interval=[10000] CRM_meta_timeout=[60000]  to the operation list.
Apr  2 14:02:54 vm1 lrmd: [21076]: info: rsc:prmDummy1 monitor[7] (pid 21201)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: run_graph: ==== Transition 2 (Complete=3, Pending=1, Fired=1, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Apr  2 14:02:54 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering b5 to b8
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b6 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b7 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b8 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b6
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b6
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b6
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b6
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b7
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b7
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b7
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b7
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b8
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b8
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b8
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b8
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including b8
Apr  2 14:02:54 vm1 Dummy(prmDummy1)[21201]: DEBUG: prmDummy1 monitor : 0
Apr  2 14:02:54 vm1 lrmd: [21076]: info: Managed prmDummy1:monitor process 21201 exited with return code 0.
Apr  2 14:02:54 vm1 lrmd: [21076]: info: operation monitor[7] on prmDummy1 for client 21079: pid 21201 exited with return code 0
Apr  2 14:02:54 vm1 crmd[21079]:    debug: create_operation_update: do_update_resource: Updating resource prmDummy1 after complete monitor op (interval=10000)
Apr  2 14:02:54 vm1 crmd[21079]:     info: process_lrm_event: LRM operation prmDummy1_monitor_10000 (call=7, rc=0, cib-update=59, confirmed=false) ok
Apr  2 14:02:54 vm1 crmd[21079]:    debug: update_history_cache: Appending monitor op to history for 'prmDummy1'
Apr  2 14:02:54 vm1 corosync[21054]:   [CPG   ] cpg.c:1683 got mcast request on 0x17d95c0
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2302 mcasted message added to pending queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3766 Delivering b8 to bb
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq b9 to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq ba to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3835 Delivering MCAST message with seq bb to pending delivery queue
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b9
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b9
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b9
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq b9
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ba
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ba
Apr  2 14:02:54 vm1 crmd[21079]:    debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Apr  2 14:02:54 vm1 crmd[21079]:    debug: te_update_diff: Processing diff (cib_modify): 0.5.31 -> 0.5.32 (S_TRANSITION_ENGINE)
Apr  2 14:02:54 vm1 crmd[21079]:    debug: match_graph_event: Action prmDummy1_monitor_10000 (1) confirmed on vm1 (rc=0)
Apr  2 14:02:54 vm1 crmd[21079]:   notice: run_graph: ==== Transition 2 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-2.bz2): Complete
Apr  2 14:02:54 vm1 crmd[21079]:    debug: te_graph_trigger: Transition 2 is now complete
Apr  2 14:02:54 vm1 crmd[21079]:    debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Apr  2 14:02:54 vm1 crmd[21079]:    debug: notify_crmd: Transition 2 status: done - <null>
Apr  2 14:02:54 vm1 crmd[21079]:    debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_LOG   
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Apr  2 14:02:54 vm1 crmd[21079]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ba
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq ba
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq bb
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq bb
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq bb
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:3927 Received ringid(192.168.101.141:8) seq bb
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_state_transition: Starting PEngine Recheck Timer
Apr  2 14:02:54 vm1 crmd[21079]:    debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=74
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Apr  2 14:02:54 vm1 crmd[21079]:    debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Apr  2 14:02:54 vm1 corosync[21054]:   [TOTEM ] totemsrp.c:2425 releasing messages up to and including bb
Apr  2 14:03:04 vm1 lrmd: [21076]: debug: rsc:prmDummy1 monitor[7] (pid 21209)
Apr  2 14:03:04 vm1 Dummy(prmDummy1)[21209]: DEBUG: prmDummy1 monitor : 0
Apr  2 14:03:04 vm1 lrmd: [21076]: info: operation monitor[7] on prmDummy1 for client 21079: pid 21209 exited with return code 0
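
The 14:02:54 attrd entries above show the mismatch directly: the fail-count
update arrives with 'host' set to the corosync nodeID ("224766144"), while
attrd only knows the local node by its uname ("vm1"), so the name comparison
fails and the update is relayed away instead of being applied locally.
Below is a minimal, self-contained sketch of that comparison, just to
illustrate the failure mode; resolve_node_name() is a hypothetical
nodeID-to-uname lookup, not the real attrd/crmd API.

    /* Sketch of the host/uname mismatch seen in the log above.
     * resolve_node_name() is a hypothetical stand-in for a
     * nodeID -> uname lookup; it is NOT the actual attrd code. */
    #include <stdio.h>
    #include <string.h>

    static const char *
    resolve_node_name(const char *host)
    {
        /* Hypothetical lookup: map corosync nodeIDs to unames.
         * Hard-coded here for this two-node example. */
        if (strcmp(host, "224766144") == 0) {
            return "vm1";
        }
        return host; /* already a uname */
    }

    static void
    handle_update(const char *host, const char *local_uname)
    {
        /* Observed behaviour: the raw target ("224766144") never
         * matches the local uname ("vm1"), so the update is
         * forwarded to a peer that does not exist under that name
         * and fail-count is never incremented. */
        if (host != NULL && strcmp(host, local_uname) != 0) {
            printf("relaying update for '%s' (never applied locally)\n", host);
            return;
        }
        printf("applying update locally on '%s'\n", local_uname);
    }

    int
    main(void)
    {
        handle_update("224766144", "vm1");                    /* bug path   */
        handle_update(resolve_node_name("224766144"), "vm1"); /* normalized */
        return 0;
    }

If the target were normalized to a uname before the comparison, as in the
second call above, the update would be applied locally and fail-count would
increment as expected.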