logd[12078]: 2008/06/13_14:04:19 info: logd started with /etc/logd.cf.
logd[12085]: 2008/06/13_14:04:19 info: G_main_add_SignalHandler: Added signal handler for signal 15
logd[12078]: 2008/06/13_14:04:19 info: G_main_add_SignalHandler: Added signal handler for signal 15
heartbeat[12112]: 2008/06/13_14:04:19 info: Enabling logging daemon 
heartbeat[12112]: 2008/06/13_14:04:19 info: logfile and debug file are those specified in logd config file (default /etc/logd.cf)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(keepalive,2)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(deadtime,30)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(initdead,30)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(warntime,20)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(udpport,694)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(bcast,eth2)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(node,node-a)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(node,node-b)
heartbeat[12112]: 2008/06/13_14:04:19 debug: uid=hacluster, gid=<null>
heartbeat[12112]: 2008/06/13_14:04:19 debug: uid=hacluster, gid=<null>
heartbeat[12112]: 2008/06/13_14:04:19 debug: uid=<null>, gid=haclient
heartbeat[12112]: 2008/06/13_14:04:19 debug: uid=root, gid=<null>
heartbeat[12112]: 2008/06/13_14:04:19 debug: uid=<null>, gid=haclient
heartbeat[12112]: 2008/06/13_14:04:19 debug: Beginning authentication parsing
heartbeat[12112]: 2008/06/13_14:04:19 debug: 16 max authentication methods
heartbeat[12112]: 2008/06/13_14:04:19 debug: Keyfile opened
heartbeat[12112]: 2008/06/13_14:04:19 debug: Keyfile perms OK
heartbeat[12112]: 2008/06/13_14:04:19 debug: 16 max authentication methods
heartbeat[12112]: 2008/06/13_14:04:19 debug: Found authentication method [sha1]
heartbeat[12112]: 2008/06/13_14:04:19 info: AUTH: i=1: key = 0x992318, auth=0x2aaaabd7b630, authname=sha1
heartbeat[12112]: 2008/06/13_14:04:19 debug: Outbound signing method is 1
heartbeat[12112]: 2008/06/13_14:04:19 debug: Authentication parsing complete [1]
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(cluster,linux-ha)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(hopfudge,1)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(baud,19200)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(auto_failback,legacy)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(hbgenmethod,file)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(realtime,true)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(msgfmt,classic)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(conn_logd_time,60)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(log_badpack,true)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(syslogmsgfmt,false)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(coredumps,true)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(autojoin,none)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(uuidfrom,file)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(compression,zlib)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(compression_threshold,2)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(traditional_compression,no)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(max_rexmit_delay,250)
heartbeat[12112]: 2008/06/13_14:04:19 debug: Setting max_rexmit_delay to 250 ms
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(record_config_changes,on)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(record_pengine_inputs,on)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(enable_config_writes,on)
heartbeat[12112]: 2008/06/13_14:04:19 debug: add_option(memreserve,6500)
heartbeat[12112]: 2008/06/13_14:04:19 info: **************************
heartbeat[12112]: 2008/06/13_14:04:19 info: Configuration validated. Starting heartbeat 2.2.0
heartbeat[12112]: 2008/06/13_14:04:19 debug: HA configuration OK.  Heartbeat starting.
heartbeat[12113]: 2008/06/13_14:04:19 info: heartbeat: version 2.2.0
heartbeat[12113]: 2008/06/13_14:04:19 info: Heartbeat generation: 1209705553
heartbeat[12113]: 2008/06/13_14:04:19 debug: uuid is:db8f2da4-a7fb-40bf-bf14-befe4af11db7
heartbeat[12113]: 2008/06/13_14:04:19 debug: FIFO process pid: 12116
heartbeat[12113]: 2008/06/13_14:04:19 debug: opening bcast eth2 (UDP/IP broadcast)
heartbeat[12113]: 2008/06/13_14:04:19 debug: glib: SO_BINDTODEVICE(r) set for device eth2
heartbeat[12113]: 2008/06/13_14:04:19 info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth2
heartbeat[12113]: 2008/06/13_14:04:19 debug: write process pid: 12117
heartbeat[12113]: 2008/06/13_14:04:19 debug: read child process pid: 12118
heartbeat[12113]: 2008/06/13_14:04:19 info: glib: UDP Broadcast heartbeat closed on port 694 interface eth2 - Status: 1
heartbeat[12113]: 2008/06/13_14:04:19 debug: make_io_childpair: CREATED childpair wchan socket 10
heartbeat[12113]: 2008/06/13_14:04:19 debug: make_io_childpair: CREATED childpair rchan socket 12
heartbeat[12113]: 2008/06/13_14:04:19 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[12113]: 2008/06/13_14:04:19 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[12113]: 2008/06/13_14:04:19 info: G_main_add_SignalHandler: Added signal handler for signal 17
heartbeat[12113]: 2008/06/13_14:04:19 debug: Limiting CPU: 42 CPU seconds every 60000 milliseconds
heartbeat[12118]: 2008/06/13_14:04:19 debug: pid 12118 locked in memory.
heartbeat[12118]: 2008/06/13_14:04:19 debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
heartbeat[12113]: 2008/06/13_14:04:19 debug: pid 12113 locked in memory.
heartbeat[12113]: 2008/06/13_14:04:19 debug: Waiting for child processes to start
heartbeat[12113]: 2008/06/13_14:04:19 info: Local status now set to: 'up'
heartbeat[12113]: 2008/06/13_14:04:19 debug: All your child process are belong to us
heartbeat[12113]: 2008/06/13_14:04:19 debug: Starting local status message @ 2000 ms intervals
heartbeat[12113]: 2008/06/13_14:04:19 debug: Forking temp process write_hostcachedata
heartbeat[12113]: 2008/06/13_14:04:19 info: Managed write_hostcachedata process 12119 exited with return code 0.
heartbeat[12116]: 2008/06/13_14:04:20 debug: pid 12116 locked in memory.
heartbeat[12116]: 2008/06/13_14:04:20 debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
heartbeat[12117]: 2008/06/13_14:04:20 debug: pid 12117 locked in memory.
heartbeat[12117]: 2008/06/13_14:04:20 debug: Limiting CPU: 24 CPU seconds every 60000 milliseconds
heartbeat[12113]: 2008/06/13_14:04:20 info: Link node-a:eth2 up.
heartbeat[12113]: 2008/06/13_14:04:20 debug: sending reqnodes msg to node node-a
heartbeat[12113]: 2008/06/13_14:04:20 info: Status update for node node-a: status up
heartbeat[12113]: 2008/06/13_14:04:20 debug: Status seqno: 2 msgtime: 1213333458
heartbeat[12113]: 2008/06/13_14:04:20 info: Link node-b:eth2 up.
heartbeat[12113]: 2008/06/13_14:04:20 debug: Forking temp process write_hostcachedata
heartbeat[12113]: 2008/06/13_14:04:20 info: Managed write_hostcachedata process 12120 exited with return code 0.
heartbeat[12113]: 2008/06/13_14:04:20 debug: Get a reqnodes message from node-a
heartbeat[12113]: 2008/06/13_14:04:20 debug: get_delnodelist: delnodelist= 
heartbeat[12113]: 2008/06/13_14:04:20 debug: Get a repnodes msg from node-a
heartbeat[12113]: 2008/06/13_14:04:20 debug: nodelist received:node-a node-b 
heartbeat[12113]: 2008/06/13_14:04:20 info: Comm_now_up(): updating status to active
heartbeat[12113]: 2008/06/13_14:04:20 info: Local status now set to: 'active'
heartbeat[12113]: 2008/06/13_14:04:20 info: Starting child client "/usr/lib64/heartbeat/ccm" (90,90)
heartbeat[12113]: 2008/06/13_14:04:20 info: Starting child client "/usr/lib64/heartbeat/cib" (90,90)
heartbeat[12113]: 2008/06/13_14:04:20 info: Starting child client "/usr/lib64/heartbeat/lrmd -r" (0,0)
heartbeat[12113]: 2008/06/13_14:04:20 info: Starting child client "/usr/lib64/heartbeat/stonithd" (0,0)
heartbeat[12113]: 2008/06/13_14:04:20 info: Starting child client "/usr/lib64/heartbeat/attrd" (90,90)
heartbeat[12113]: 2008/06/13_14:04:20 info: Starting child client "/usr/lib64/heartbeat/crmd" (90,90)
heartbeat[12113]: 2008/06/13_14:04:20 debug: Forking temp process write_hostcachedata
heartbeat[12113]: 2008/06/13_14:04:20 debug: Forking temp process write_delcachedata
heartbeat[12121]: 2008/06/13_14:04:20 info: Starting "/usr/lib64/heartbeat/ccm" as uid 90  gid 90 (pid 12121)
heartbeat[12113]: 2008/06/13_14:04:20 info: Managed write_delcachedata process 12128 exited with return code 0.
heartbeat[12126]: 2008/06/13_14:04:20 info: Starting "/usr/lib64/heartbeat/crmd" as uid 90  gid 90 (pid 12126)
heartbeat[12113]: 2008/06/13_14:04:20 debug: APIregistration_dispatch() {
ccm[12121]: 2008/06/13_14:04:20 debug: Signing in with Heartbeat
heartbeat[12113]: 2008/06/13_14:04:20 debug: process_registerevent() {
heartbeat[12113]: 2008/06/13_14:04:20 debug: client->gsource = 0x9a1e68
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*process_registerevent*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*APIregistration_dispatch*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: Checking client authorization for client ccm (90:90)
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-a
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-b
heartbeat[12113]: 2008/06/13_14:04:20 debug: Signing on API client 12121 (ccm)
heartbeat[12122]: 2008/06/13_14:04:20 info: Starting "/usr/lib64/heartbeat/cib" as uid 90  gid 90 (pid 12122)
heartbeat[12123]: 2008/06/13_14:04:20 info: Starting "/usr/lib64/heartbeat/lrmd -r" as uid 0  gid 0 (pid 12123)
crmd[12126]: 2008/06/13_14:04:20 info: main: CRM Hg Version: 32a830e35466 tip
crmd[12126]: 2008/06/13_14:04:20 info: crmd_init: Starting crmd
crmd[12126]: 2008/06/13_14:04:20 debug: register_fsa_input_adv: crmd_init appended FSA input 1 (I_STARTUP) (cause=C_STARTUP) without data
crmd[12126]: 2008/06/13_14:04:20 debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
crmd[12126]: 2008/06/13_14:04:20 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:04:20 debug: do_fsa_action: actions:trace: 	// A_STARTUP
crmd[12126]: 2008/06/13_14:04:20 debug: do_startup: Registering Signal Handlers
crmd[12126]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 15
heartbeat[12125]: 2008/06/13_14:04:20 info: Starting "/usr/lib64/heartbeat/attrd" as uid 90  gid 90 (pid 12125)
crmd[12126]: 2008/06/13_14:04:20 info: G_main_add_TriggerHandler: Added signal manual handler
crmd[12126]: 2008/06/13_14:04:20 debug: do_startup: Creating CIB and LRM objects
crmd[12126]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 17
crmd[12126]: 2008/06/13_14:04:20 debug: do_fsa_action: actions:trace: 	// A_CIB_START
crmd[12126]: 2008/06/13_14:04:20 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:20 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:20 debug: cib_native_signon: Connection to command channel failed
crmd[12126]: 2008/06/13_14:04:20 debug: cib_native_signon: Connection to CIB failed: connection failed
crmd[12126]: 2008/06/13_14:04:20 debug: cib_native_signoff: Signing out of the CIB Service
cib[12122]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 15
cib[12122]: 2008/06/13_14:04:20 info: G_main_add_TriggerHandler: Added signal manual handler
attrd[12125]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 15
attrd[12125]: 2008/06/13_14:04:20 info: main: Starting up....
heartbeat[12113]: 2008/06/13_14:04:20 debug: APIregistration_dispatch() {
attrd[12125]: 2008/06/13_14:04:20 debug: register_heartbeat_conn: Signing in with Heartbeat
heartbeat[12113]: 2008/06/13_14:04:20 debug: process_registerevent() {
heartbeat[12113]: 2008/06/13_14:04:20 debug: client->gsource = 0x9a48f8
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*process_registerevent*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*APIregistration_dispatch*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: Checking client authorization for client attrd (90:90)
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-a
ccm[12121]: 2008/06/13_14:04:20 info: Hostname: node-b
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-b
heartbeat[12113]: 2008/06/13_14:04:20 debug: Signing on API client 12125 (attrd)
cib[12122]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 17
cib[12122]: 2008/06/13_14:04:20 info: main: Retrieval of a per-action CIB: disabled
cib[12122]: 2008/06/13_14:04:20 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12122]: 2008/06/13_14:04:20 WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
cib[12122]: 2008/06/13_14:04:20 WARN: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
cib[12122]: 2008/06/13_14:04:20 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12122]: 2008/06/13_14:04:20 WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml.last
cib[12122]: 2008/06/13_14:04:20 WARN: readCibXmlFile: Continuing with an empty configuration.
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk] <cib epoch="0" num_updates="0" admin_epoch="0">
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]   <configuration>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]     <crm_config/>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]     <nodes/>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]     <resources/>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]     <constraints/>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]   </configuration>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk]   <status/>
cib[12122]: 2008/06/13_14:04:20 info: log_data_element: readCibXmlFile: [on-disk] </cib>
cib[12122]: 2008/06/13_14:04:20 debug: update_validation: Testing 'none' validation
cib[12122]: 2008/06/13_14:04:20 info: validate_with: Validating with: <null> (type=0)
cib[12122]: 2008/06/13_14:04:20 debug: update_validation: Testing 'pacemaker-0.6' validation
cib[12122]: 2008/06/13_14:04:20 info: validate_with: Validating with: /usr/share/heartbeat/crm.dtd (type=1)
lrmd[12123]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 15
lrmd[12123]: 2008/06/13_14:04:20 debug: LRM debug level set to 1
heartbeat[12113]: 2008/06/13_14:04:20 info: Managed write_hostcachedata process 12127 exited with return code 0.
attrd[12125]: 2008/06/13_14:04:20 info: register_heartbeat_conn: Hostname: node-b
lrmd[12123]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 17
lrmd[12123]: 2008/06/13_14:04:20 debug: Enabling coredumps
lrmd[12123]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 10
heartbeat[12124]: 2008/06/13_14:04:20 info: Starting "/usr/lib64/heartbeat/stonithd" as uid 0  gid 0 (pid 12124)
lrmd[12123]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 12
lrmd[12123]: 2008/06/13_14:04:20 debug: main: run the loop...
lrmd[12123]: 2008/06/13_14:04:20 info: Started.
attrd[12125]: 2008/06/13_14:04:20 info: register_heartbeat_conn: UUID: db8f2da4-a7fb-40bf-bf14-befe4af11db7
attrd[12125]: 2008/06/13_14:04:20 debug: main: CIB signon attempt 0
attrd[12125]: 2008/06/13_14:04:20 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
attrd[12125]: 2008/06/13_14:04:20 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
attrd[12125]: 2008/06/13_14:04:20 debug: cib_native_signon: Connection to command channel failed
cib[12122]: 2008/06/13_14:04:20 debug: update_validation: Testing 'transitional-0.6' validation
cib[12122]: 2008/06/13_14:04:20 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
attrd[12125]: 2008/06/13_14:04:20 debug: cib_native_signon: Connection to CIB failed: connection failed
attrd[12125]: 2008/06/13_14:04:20 debug: cib_native_signoff: Signing out of the CIB Service
cib[12122]: 2008/06/13_14:04:20 debug: update_validation: Testing 'pacemaker-0.7' validation
cib[12122]: 2008/06/13_14:04:20 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
stonithd[12124]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 10
stonithd[12124]: 2008/06/13_14:04:20 info: G_main_add_SignalHandler: Added signal handler for signal 12
stonithd[12124]: 2008/06/13_14:04:20 debug: pid 12124 locked in memory.
heartbeat[12113]: 2008/06/13_14:04:20 debug: APIregistration_dispatch() {
heartbeat[12113]: 2008/06/13_14:04:20 debug: process_registerevent() {
heartbeat[12113]: 2008/06/13_14:04:20 debug: client->gsource = 0x9a5278
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*process_registerevent*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*APIregistration_dispatch*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: Checking client authorization for client stonithd (0:0)
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-a
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-b
heartbeat[12113]: 2008/06/13_14:04:20 debug: Signing on API client 12124 (stonithd)
cib[12122]: 2008/06/13_14:04:20 notice: update_validation: Upgraded from <none> to transitional-0.6 validation
cib[12122]: 2008/06/13_14:04:20 notice: readCibXmlFile: Enabling transitional-0.6 validation on the existing (sane) configuration
cib[12122]: 2008/06/13_14:04:20 debug: activateCibXml: Triggering CIB write for start op
heartbeat[12113]: 2008/06/13_14:04:20 debug: APIregistration_dispatch() {
cib[12122]: 2008/06/13_14:04:20 info: startCib: CIB Initialization completed successfully
heartbeat[12113]: 2008/06/13_14:04:20 debug: process_registerevent() {
cib[12122]: 2008/06/13_14:04:20 debug: register_heartbeat_conn: Signing in with Heartbeat
heartbeat[12113]: 2008/06/13_14:04:20 debug: client->gsource = 0x9a85a8
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*process_registerevent*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: }/*APIregistration_dispatch*/;
heartbeat[12113]: 2008/06/13_14:04:20 debug: Checking client authorization for client cib (90:90)
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-a
heartbeat[12113]: 2008/06/13_14:04:20 debug: create_seq_snapshot_table:no missing packets found for node node-b
heartbeat[12113]: 2008/06/13_14:04:20 debug: Signing on API client 12122 (cib)
heartbeat[12113]: 2008/06/13_14:04:21 info: Status update for node node-a: status active
heartbeat[12113]: 2008/06/13_14:04:21 debug: Status seqno: 7 msgtime: 1213333460
cib[12122]: 2008/06/13_14:04:21 info: register_heartbeat_conn: Hostname: node-b
cib[12122]: 2008/06/13_14:04:21 info: register_heartbeat_conn: UUID: db8f2da4-a7fb-40bf-bf14-befe4af11db7
cib[12122]: 2008/06/13_14:04:21 info: ccm_connect: Registering with CCM...
cib[12122]: 2008/06/13_14:04:21 WARN: ccm_connect: CCM Activation failed
stonithd[12124]: 2008/06/13_14:04:21 info: register_heartbeat_conn: Hostname: node-b
cib[12122]: 2008/06/13_14:04:21 WARN: ccm_connect: CCM Connection failed 1 times (30 max)
stonithd[12124]: 2008/06/13_14:04:21 info: register_heartbeat_conn: UUID: db8f2da4-a7fb-40bf-bf14-befe4af11db7
stonithd[12124]: 2008/06/13_14:04:21 debug: Setting message filter mode
stonithd[12124]: 2008/06/13_14:04:21 debug: apichan=0x1081bc98
stonithd[12124]: 2008/06/13_14:04:21 debug: callback_chan=0x1081bf18
stonithd[12124]: 2008/06/13_14:04:21 notice: /usr/lib64/heartbeat/stonithd start up successfully.
stonithd[12124]: 2008/06/13_14:04:21 info: G_main_add_SignalHandler: Added signal handler for signal 17
crmd[12126]: 2008/06/13_14:04:21 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:21 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:21 debug: cib_native_signon: Connection to command channel failed
crmd[12126]: 2008/06/13_14:04:21 debug: cib_native_signon: Connection to CIB failed: connection failed
crmd[12126]: 2008/06/13_14:04:21 debug: cib_native_signoff: Signing out of the CIB Service
crmd[12126]: 2008/06/13_14:04:21 debug: do_cib_control: Could not connect to the CIB service
crmd[12126]: 2008/06/13_14:04:21 WARN: do_cib_control: Couldn't complete CIB registration 1 times... pause and retry
crmd[12126]: 2008/06/13_14:04:21 debug: crm_timer_start: Started Wait Timer (I_NULL:2000ms), src=5
crmd[12126]: 2008/06/13_14:04:21 debug: register_fsa_input_adv: do_cib_control prepended FSA input 2 (I_WAIT_FOR_EVENT) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:04:21 debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
crmd[12126]: 2008/06/13_14:04:21 debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x180021000000006, stalled=true
crmd[12126]: 2008/06/13_14:04:21 info: crmd_init: Starting crmd's mainloop
crmd[12126]: 2008/06/13_14:04:23 info: crm_timer_popped: Wait Timer (I_NULL) just popped!
crmd[12126]: 2008/06/13_14:04:23 debug: crm_timer_stop: Stopping Wait Timer (I_NULL:2000ms), src=5
crmd[12126]: 2008/06/13_14:04:23 debug: do_fsa_action: actions:trace: 	// A_CIB_START
crmd[12126]: 2008/06/13_14:04:23 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:23 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:23 debug: cib_native_signon: Connection to command channel failed
crmd[12126]: 2008/06/13_14:04:23 debug: cib_native_signon: Connection to CIB failed: connection failed
crmd[12126]: 2008/06/13_14:04:23 debug: cib_native_signoff: Signing out of the CIB Service
ccm[12121]: 2008/06/13_14:04:24 debug: node state CCM_STATE_NONE -> CCM_STATE_NONE
ccm[12121]: 2008/06/13_14:04:24 debug: node state CCM_STATE_NONE -> CCM_STATE_NONE
ccm[12121]: 2008/06/13_14:04:24 info: G_main_add_SignalHandler: Added signal handler for signal 15
cib[12122]: 2008/06/13_14:04:24 info: ccm_connect: Registering with CCM...
cib[12122]: 2008/06/13_14:04:24 debug: ccm_connect: CCM Activation passed... all set to go!
cib[12122]: 2008/06/13_14:04:24 info: cib_init: Requesting the list of configured nodes
cib[12122]: 2008/06/13_14:04:24 debug: Delaying cstatus request for 0 ms
cib[12122]: 2008/06/13_14:04:24 info: cib_init: Starting cib mainloop
cib[12122]: 2008/06/13_14:04:24 info: cib_client_status_callback: Status update: Client node-b/cib now has status [join]
cib[12122]: 2008/06/13_14:04:24 info: crm_update_peer: Creating entry for node node-b/0/0
cib[12122]: 2008/06/13_14:04:24 info: crm_update_peer_proc: node-b.cib is now online
cib[12122]: 2008/06/13_14:04:24 info: cib_client_status_callback: Status update: Client node-a/cib now has status [join]
cib[12122]: 2008/06/13_14:04:24 info: crm_update_peer: Creating entry for node node-a/0/0
cib[12122]: 2008/06/13_14:04:24 info: crm_update_peer_proc: node-a.cib is now online
cib[12122]: 2008/06/13_14:04:24 info: cib_client_status_callback: Status update: Client node-b/cib now has status [online]
cib[12122]: 2008/06/13_14:04:24 debug: Forking temp process write_cib_contents
cib[12129]: 2008/06/13_14:04:24 debug: write_cib_contents: Wrote CIB to disk
cib[12129]: 2008/06/13_14:04:24 info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: c8f39821c7d4a65fa661bedd63fdc21a)
cib[12129]: 2008/06/13_14:04:24 debug: write_cib_contents: Wrote digest to disk
cib[12129]: 2008/06/13_14:04:24 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12129]: 2008/06/13_14:04:24 debug: write_cib_contents: Wrote and verified CIB
cib[12122]: 2008/06/13_14:04:24 info: Managed write_cib_contents process 12129 exited with return code 0.
crmd[12126]: 2008/06/13_14:04:24 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[12126]: 2008/06/13_14:04:24 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_callback
crmd[12126]: 2008/06/13_14:04:24 debug: cib_native_signon: Connection to CIB successful
crmd[12126]: 2008/06/13_14:04:24 info: do_cib_control: CIB connection established
crmd[12126]: 2008/06/13_14:04:24 debug: do_fsa_action: actions:trace: 	// A_HA_CONNECT
crmd[12126]: 2008/06/13_14:04:24 debug: register_heartbeat_conn: Signing in with Heartbeat
cib[12122]: 2008/06/13_14:04:24 info: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 12126 (9d56824d-1739-4b62-9a5f-3d2fda208595): on
heartbeat[12113]: 2008/06/13_14:04:24 debug: APIregistration_dispatch() {
heartbeat[12113]: 2008/06/13_14:04:24 debug: process_registerevent() {
heartbeat[12113]: 2008/06/13_14:04:24 debug: client->gsource = 0x99c1b8
heartbeat[12113]: 2008/06/13_14:04:24 debug: }/*process_registerevent*/;
heartbeat[12113]: 2008/06/13_14:04:24 debug: }/*APIregistration_dispatch*/;
heartbeat[12113]: 2008/06/13_14:04:24 debug: Checking client authorization for client crmd (90:90)
heartbeat[12113]: 2008/06/13_14:04:24 debug: create_seq_snapshot_table:no missing packets found for node node-a
heartbeat[12113]: 2008/06/13_14:04:24 debug: create_seq_snapshot_table:no missing packets found for node node-b
heartbeat[12113]: 2008/06/13_14:04:24 debug: Signing on API client 12126 (crmd)
heartbeat[12113]: 2008/06/13_14:04:25 WARN: 1 lost packet(s) for [node-a] [16:18]
cib[12122]: 2008/06/13_14:04:25 info: cib_client_status_callback: Status update: Client node-a/cib now has status [online]
heartbeat[12113]: 2008/06/13_14:04:25 info: No pkts missing from node-a!
crmd[12126]: 2008/06/13_14:04:25 info: register_heartbeat_conn: Hostname: node-b
crmd[12126]: 2008/06/13_14:04:25 info: register_heartbeat_conn: UUID: db8f2da4-a7fb-40bf-bf14-befe4af11db7
ccm[12121]: 2008/06/13_14:04:25 debug: recv msg hbapi-clstat from node-b, status:join
crmd[12126]: 2008/06/13_14:04:25 debug: Delaying cstatus request for 97 ms
crmd[12126]: 2008/06/13_14:04:25 info: do_ha_control: Connected to Heartbeat
crmd[12126]: 2008/06/13_14:04:25 debug: do_fsa_action: actions:trace: 	// A_READCONFIG
crmd[12126]: 2008/06/13_14:04:25 debug: do_fsa_action: actions:trace: 	// A_LRM_CONNECT
crmd[12126]: 2008/06/13_14:04:25 debug: do_lrm_control: Connecting to the LRM
lrmd[12123]: 2008/06/13_14:04:25 debug: on_msg_register:client crmd [12126] registered
crmd[12126]: 2008/06/13_14:04:25 debug: do_lrm_control: LRM connection established
crmd[12126]: 2008/06/13_14:04:25 debug: do_fsa_action: actions:trace: 	// A_CCM_CONNECT
crmd[12126]: 2008/06/13_14:04:25 info: do_ccm_control: CCM connection established... waiting for first callback
crmd[12126]: 2008/06/13_14:04:25 debug: do_fsa_action: actions:trace: 	// A_STARTED
crmd[12126]: 2008/06/13_14:04:25 info: do_started: Delaying start, CCM (0000000000100000) not connected
crmd[12126]: 2008/06/13_14:04:25 debug: register_fsa_input_adv: do_started prepended FSA input 3 (I_WAIT_FOR_EVENT) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:04:25 debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
crmd[12126]: 2008/06/13_14:04:25 debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
crmd[12126]: 2008/06/13_14:04:25 debug: fsa_dump_inputs: Added input: 0000000000000100 (R_CIB_CONNECTED)
crmd[12126]: 2008/06/13_14:04:25 debug: fsa_dump_inputs: Added input: 0000000000000800 (R_LRM_CONNECTED)
crmd[12126]: 2008/06/13_14:04:25 debug: config_query_callback: Call 3 : Parsing CIB options
crmd[12126]: 2008/06/13_14:04:25 notice: crmd_client_status_callback: Status update: Client node-b/crmd now has status [online] (DC=false)
crmd[12126]: 2008/06/13_14:04:25 info: crm_update_peer: Creating entry for node node-b/0/0
crmd[12126]: 2008/06/13_14:04:25 info: crm_update_peer_proc: node-b.crmd is now online
crmd[12126]: 2008/06/13_14:04:25 info: crmd_client_status_callback: Not the DC
crmd[12126]: 2008/06/13_14:04:25 notice: crmd_client_status_callback: Status update: Client node-a/crmd now has status [online] (DC=false)
attrd[12125]: 2008/06/13_14:04:25 debug: main: CIB signon attempt 1
attrd[12125]: 2008/06/13_14:04:25 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
attrd[12125]: 2008/06/13_14:04:25 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_callback
attrd[12125]: 2008/06/13_14:04:25 debug: cib_native_signon: Connection to CIB successful
ccm[12121]: 2008/06/13_14:04:26 debug: recv msg status from node-a, status:active
ccm[12121]: 2008/06/13_14:04:26 debug: status of node node-a: active -> active
crmd[12126]: 2008/06/13_14:04:26 info: crm_update_peer: Creating entry for node node-a/0/0
crmd[12126]: 2008/06/13_14:04:26 info: crm_update_peer_proc: node-a.crmd is now online
crmd[12126]: 2008/06/13_14:04:26 info: crmd_client_status_callback: Not the DC
crmd[12126]: 2008/06/13_14:04:26 notice: crmd_client_status_callback: Status update: Client node-b/crmd now has status [online] (DC=false)
crmd[12126]: 2008/06/13_14:04:26 info: crmd_client_status_callback: Not the DC
crmd[12126]: 2008/06/13_14:04:26 notice: crmd_client_status_callback: Status update: Client node-a/crmd now has status [online] (DC=false)
crmd[12126]: 2008/06/13_14:04:26 info: crmd_client_status_callback: Not the DC
crmd[12126]: 2008/06/13_14:04:26 debug: do_fsa_action: actions:trace: 	// A_STARTED
crmd[12126]: 2008/06/13_14:04:26 info: do_started: Delaying start, CCM (0000000000100000) not connected
crmd[12126]: 2008/06/13_14:04:26 debug: register_fsa_input_adv: do_started prepended FSA input 4 (I_WAIT_FOR_EVENT) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:04:26 debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
crmd[12126]: 2008/06/13_14:04:26 debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
heartbeat[12113]: 2008/06/13_14:04:26 WARN: 1 lost packet(s) for [node-a] [20:22]
heartbeat[12113]: 2008/06/13_14:04:26 info: No pkts missing from node-a!
ccm[12121]: 2008/06/13_14:04:27 debug: recv msg hbapi-clstat from node-a, status:join
ccm[12121]: 2008/06/13_14:04:28 debug: recv msg CCM_TYPE_PROTOVERSION from node-a, status:[null ptr]
ccm[12121]: 2008/06/13_14:04:28 debug: send msg CCM_TYPE_PROTOVERSION to cluster, status:[null]
ccm[12121]: 2008/06/13_14:04:28 debug: node state CCM_STATE_NONE -> CCM_STATE_VERSION_REQUEST
ccm[12121]: 2008/06/13_14:04:28 debug: recv msg CCM_TYPE_PROTOVERSION from node-b, status:[null ptr]
ccm[12121]: 2008/06/13_14:04:28 debug: No quorum selected,using default quorum plugin(majority:twonodes)
ccm[12121]: 2008/06/13_14:04:28 debug: quorum plugin: majority
ccm[12121]: 2008/06/13_14:04:28 debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
ccm[12121]: 2008/06/13_14:04:28 debug: total_node_count=2, total_quorum_votes=200
ccm[12121]: 2008/06/13_14:04:28 debug: quorum plugin: twonodes
crmd[12126]: 2008/06/13_14:04:28 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
ccm[12121]: 2008/06/13_14:04:28 debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
crmd[12126]: 2008/06/13_14:04:28 info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
ccm[12121]: 2008/06/13_14:04:28 debug: total_node_count=2, total_quorum_votes=200
crmd[12126]: 2008/06/13_14:04:28 info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=1)
ccm[12121]: 2008/06/13_14:04:28 info: Break tie for 2 nodes cluster
ccm[12121]: 2008/06/13_14:04:28 debug: node state CCM_STATE_VERSION_REQUEST -> CCM_STATE_JOINED
ccm[12121]: 2008/06/13_14:04:28 debug: dump current membership 0x2aaaab317028
crmd[12126]: 2008/06/13_14:04:28 info: crm_update_quorum: Updating quorum status to true (call=4)
cib[12122]: 2008/06/13_14:04:28 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
ccm[12121]: 2008/06/13_14:04:28 debug: 	leader=node-b
crmd[12126]: 2008/06/13_14:04:28 info: ccm_event_detail: NEW MEMBERSHIP: trans=1, nodes=1, new=1, lost=0 n_idx=0, new_idx=0, old_idx=3
cib[12122]: 2008/06/13_14:04:28 info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
ccm[12121]: 2008/06/13_14:04:28 debug: 	transition=1
crmd[12126]: 2008/06/13_14:04:28 info: ccm_event_detail: 	CURRENT: node-b [nodeid=1, born=1]
cib[12122]: 2008/06/13_14:04:28 info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=1)
ccm[12121]: 2008/06/13_14:04:28 debug: 	status=CCM_STATE_JOINED
crmd[12126]: 2008/06/13_14:04:28 info: ccm_event_detail: 	NEW:     node-b [nodeid=1, born=1]
cib[12122]: 2008/06/13_14:04:28 info: crm_update_peer: Node node-b now has id 1
ccm[12121]: 2008/06/13_14:04:28 debug: 	has_quorum=1
cib[12122]: 2008/06/13_14:04:28 info: crm_update_peer: Node node-b is now: member
ccm[12121]: 2008/06/13_14:04:28 debug: 	nodename=node-b bornon=1
cib[12122]: 2008/06/13_14:04:28 info: crm_update_peer_proc: node-b.ais is now online
ccm[12121]: 2008/06/13_14:04:28 debug: quorum is 1
cib[12122]: 2008/06/13_14:04:28 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
ccm[12121]: 2008/06/13_14:04:28 debug: delivering new membership to 2 clients: 
cib[12122]: 2008/06/13_14:04:28 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
ccm[12121]: 2008/06/13_14:04:28 debug: client: pid =12126
ccm[12121]: 2008/06/13_14:04:28 debug: client: pid =12122
ccm[12121]: 2008/06/13_14:04:28 debug: send msg CCM_TYPE_PROTOVERSION_RESP to node-a, status:[null]
cib[12122]: 2008/06/13_14:04:28 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[12122]: 2008/06/13_14:04:28 debug: send_peer_reply: Sending update diff 0.0.0 -> 0.1.1
cib[12122]: 2008/06/13_14:04:28 debug: Forking temp process write_cib_contents
cib[12130]: 2008/06/13_14:04:28 debug: write_cib_contents: Archiving current version
cib[12130]: 2008/06/13_14:04:28 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12130]: 2008/06/13_14:04:28 debug: archive_file: /var/lib/heartbeat/crm/cib.xml archived as /var/lib/heartbeat/crm/cib.xml.last
crmd[12126]: 2008/06/13_14:04:28 info: crm_update_peer: Node node-b now has id 1
crmd[12126]: 2008/06/13_14:04:28 info: crm_update_peer: Node node-b is now: member
crmd[12126]: 2008/06/13_14:04:28 info: crm_update_peer_proc: node-b.ais is now online
crmd[12126]: 2008/06/13_14:04:28 debug: post_cache_update: Updated cache after membership event 1.
crmd[12126]: 2008/06/13_14:04:28 info: do_update_cib_nodes: Non-DCs dont update node status - they get it from the DC
crmd[12126]: 2008/06/13_14:04:28 debug: do_fsa_action: actions:trace: 	// A_STARTED
crmd[12126]: 2008/06/13_14:04:28 debug: do_started: Init server comms
crmd[12126]: 2008/06/13_14:04:28 info: do_started: The local CRM is operational
crmd[12126]: 2008/06/13_14:04:28 debug: register_fsa_input_adv: do_started appended FSA input 5 (I_PENDING) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:04:28 debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
crmd[12126]: 2008/06/13_14:04:28 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:04:28 info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
crmd[12126]: 2008/06/13_14:04:28 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:04:28 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:04:28 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_QUERY
cib[12130]: 2008/06/13_14:04:28 debug: archive_file: /var/lib/heartbeat/crm/cib.xml.sig archived as /var/lib/heartbeat/crm/cib.xml.sig.last
cib[12130]: 2008/06/13_14:04:28 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12130]: 2008/06/13_14:04:28 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12130]: 2008/06/13_14:04:28 debug: write_cib_contents: Verified CIB archive
cib[12130]: 2008/06/13_14:04:28 debug: write_cib_contents: Wrote CIB to disk
cib[12130]: 2008/06/13_14:04:28 info: write_cib_contents: Wrote version 0.1.1 of the CIB to disk (digest: 953e86a3a5d8f267755c7bb0d9d3f044)
cib[12130]: 2008/06/13_14:04:28 debug: write_cib_contents: Wrote digest to disk
cib[12130]: 2008/06/13_14:04:28 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12130]: 2008/06/13_14:04:28 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12130]: 2008/06/13_14:04:28 debug: write_cib_contents: Wrote and verified CIB
cib[12122]: 2008/06/13_14:04:28 info: Managed write_cib_contents process 12130 exited with return code 0.
ccm[12121]: 2008/06/13_14:04:29 WARN: ccm_state_joined: received message with unknown cookie, just dropping
ccm[12121]: 2008/06/13_14:04:29 debug: dump current membership 0x2aaaab317028
ccm[12121]: 2008/06/13_14:04:29 debug: 	leader=node-b
ccm[12121]: 2008/06/13_14:04:29 debug: 	transition=1
ccm[12121]: 2008/06/13_14:04:29 debug: 	status=CCM_STATE_JOINED
ccm[12121]: 2008/06/13_14:04:29 debug: 	has_quorum=1
ccm[12121]: 2008/06/13_14:04:29 debug: 	nodename=node-b bornon=1
ccm[12121]: 2008/06/13_14:04:29 debug: recv msg CCM_TYPE_ALIVE from node-a, status:[null ptr]
ccm[12121]: 2008/06/13_14:04:29 debug: quorum plugin: majority
ccm[12121]: 2008/06/13_14:04:29 debug: cluster:linux-ha, member_count=2, member_quorum_votes=200
ccm[12121]: 2008/06/13_14:04:29 debug: total_node_count=2, total_quorum_votes=200
cib[12122]: 2008/06/13_14:04:29 info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
ccm[12121]: 2008/06/13_14:04:29 debug: send msg CCM_TYPE_MEM_LIST to cluster, status:[null]
cib[12122]: 2008/06/13_14:04:29 info: mem_handle_event: no mbr_track info
ccm[12121]: 2008/06/13_14:04:29 debug: dump current membership 0x2aaaab317028
cib[12122]: 2008/06/13_14:04:29 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
ccm[12121]: 2008/06/13_14:04:29 debug: 	leader=node-b
cib[12122]: 2008/06/13_14:04:29 info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
ccm[12121]: 2008/06/13_14:04:29 debug: 	transition=2
cib[12122]: 2008/06/13_14:04:29 info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=2)
ccm[12121]: 2008/06/13_14:04:29 debug: 	status=CCM_STATE_JOINED
cib[12122]: 2008/06/13_14:04:29 info: crm_update_peer: Node node-a is now: member
ccm[12121]: 2008/06/13_14:04:29 debug: 	has_quorum=1
cib[12122]: 2008/06/13_14:04:29 info: crm_update_peer_proc: node-a.ais is now online
ccm[12121]: 2008/06/13_14:04:29 debug: 	nodename=node-b bornon=1
ccm[12121]: 2008/06/13_14:04:29 debug: 	nodename=node-a bornon=2
ccm[12121]: 2008/06/13_14:04:29 debug: quorum is 1
ccm[12121]: 2008/06/13_14:04:29 debug: delivering new membership to 2 clients: 
ccm[12121]: 2008/06/13_14:04:29 debug: client: pid =12126
ccm[12121]: 2008/06/13_14:04:29 debug: client: pid =12122
ccm[12121]: 2008/06/13_14:04:29 debug: recv msg CCM_TYPE_MEM_LIST from node-b, status:[null ptr]
ccm[12121]: 2008/06/13_14:04:29 WARN: ccm_state_joined: received message with unknown cookie, just dropping
ccm[12121]: 2008/06/13_14:04:29 debug: dump current membership 0x2aaaab317028
ccm[12121]: 2008/06/13_14:04:29 debug: 	leader=node-b
ccm[12121]: 2008/06/13_14:04:29 debug: 	transition=2
ccm[12121]: 2008/06/13_14:04:29 debug: 	status=CCM_STATE_JOINED
ccm[12121]: 2008/06/13_14:04:29 debug: 	has_quorum=1
ccm[12121]: 2008/06/13_14:04:29 debug: 	nodename=node-b bornon=1
ccm[12121]: 2008/06/13_14:04:29 debug: 	nodename=node-a bornon=2
crmd[12126]: 2008/06/13_14:04:29 debug: do_cl_join_query: Querying for a DC
crmd[12126]: 2008/06/13_14:04:29 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
crmd[12126]: 2008/06/13_14:04:29 debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:30000ms), src=14
crmd[12126]: 2008/06/13_14:04:29 debug: cib_quorum_update_complete: Quorum update 4 complete
crmd[12126]: 2008/06/13_14:04:29 info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
crmd[12126]: 2008/06/13_14:04:29 info: mem_handle_event: no mbr_track info
crmd[12126]: 2008/06/13_14:04:29 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
crmd[12126]: 2008/06/13_14:04:29 info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
crmd[12126]: 2008/06/13_14:04:29 info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=2)
crmd[12126]: 2008/06/13_14:04:29 info: crm_update_quorum: Updating quorum status to true (call=6)
crmd[12126]: 2008/06/13_14:04:29 info: ccm_event_detail: NEW MEMBERSHIP: trans=2, nodes=2, new=1, lost=0 n_idx=0, new_idx=2, old_idx=4
crmd[12126]: 2008/06/13_14:04:29 info: ccm_event_detail: 	CURRENT: node-b [nodeid=1, born=1]
cib[12122]: 2008/06/13_14:04:29 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
crmd[12126]: 2008/06/13_14:04:29 info: ccm_event_detail: 	CURRENT: node-a [nodeid=0, born=2]
cib[12122]: 2008/06/13_14:04:29 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:04:29 info: ccm_event_detail: 	NEW:     node-a [nodeid=0, born=2]
cib[12122]: 2008/06/13_14:04:29 debug: send_peer_reply: Sending update diff 0.1.1 -> 0.1.2
crmd[12126]: 2008/06/13_14:04:29 info: crm_update_peer: Node node-a is now: member
crmd[12126]: 2008/06/13_14:04:29 info: crm_update_peer_proc: node-a.ais is now online
crmd[12126]: 2008/06/13_14:04:29 debug: post_cache_update: Updated cache after membership event 2.
crmd[12126]: 2008/06/13_14:04:29 info: do_update_cib_nodes: Non-DCs dont update node status - they get it from the DC
crmd[12126]: 2008/06/13_14:04:29 debug: cib_quorum_update_complete: Quorum update 6 complete
cib[12122]: 2008/06/13_14:04:30 WARN: cib_process_diff: Diff 0.0.0 -> 0.1.1 not applied to 0.1.2: current "epoch" is greater than required
cib[12122]: 2008/06/13_14:04:30 WARN: do_cib_notify: cib_apply_diff of <diff > FAILED: Application of an update diff failed
cib[12122]: 2008/06/13_14:04:30 WARN: cib_process_request: cib_apply_diff operation failed: Application of an update diff failed
attrd[12125]: 2008/06/13_14:04:30 info: main: Starting mainloop...
crmd[12126]: 2008/06/13_14:04:59 info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
crmd[12126]: 2008/06/13_14:04:59 debug: crm_timer_stop: Stopping Election Trigger (I_DC_TIMEOUT:30000ms), src=14
crmd[12126]: 2008/06/13_14:04:59 debug: register_fsa_input_adv: crm_timer_popped appended FSA input 6 (I_DC_TIMEOUT) (cause=C_TIMER_POPPED) without data
crmd[12126]: 2008/06/13_14:04:59 debug: s_crmd_fsa: Processing I_DC_TIMEOUT: [ state=S_PENDING cause=C_TIMER_POPPED origin=crm_timer_popped ]
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_WARN  
crmd[12126]: 2008/06/13_14:04:59 WARN: do_log: [[FSA]] Input I_DC_TIMEOUT from crm_timer_popped() received in state (S_PENDING)
crmd[12126]: 2008/06/13_14:04:59 info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
crmd[12126]: 2008/06/13_14:04:59 debug: do_election_vote: Destroying voted hash
crmd[12126]: 2008/06/13_14:04:59 debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=16
crmd[12126]: 2008/06/13_14:04:59 debug: register_fsa_input_adv: handle_request appended FSA input 7 (I_NULL) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[12126]: 2008/06/13_14:04:59 debug: do_election_count_vote: Created voted hash
crmd[12126]: 2008/06/13_14:04:59 debug: do_election_count_vote: Election 2, owner: db8f2da4-a7fb-40bf-bf14-befe4af11db7
crmd[12126]: 2008/06/13_14:04:59 info: do_election_count_vote: Updated voted hash for node-b to vote
crmd[12126]: 2008/06/13_14:04:59 info: do_election_count_vote: Election ignore: our vote (node-b)
crmd[12126]: 2008/06/13_14:04:59 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[12126]: 2008/06/13_14:04:59 info: do_election_check: Still waiting on 1 non-votes (2 total)
crmd[12126]: 2008/06/13_14:05:00 debug: register_fsa_input_adv: handle_request appended FSA input 8 (I_NULL) (cause=C_HA_MESSAGE) with data
cib[12122]: 2008/06/13_14:05:00 info: cib_common_callback_worker: Setting cib_diff_notify callbacks for 12126 (9d56824d-1739-4b62-9a5f-3d2fda208595): on
crmd[12126]: 2008/06/13_14:05:00 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[12126]: 2008/06/13_14:05:00 debug: do_election_count_vote: Election 2, owner: db8f2da4-a7fb-40bf-bf14-befe4af11db7
crmd[12126]: 2008/06/13_14:05:00 info: do_election_count_vote: Updated voted hash for node-a to no-vote
crmd[12126]: 2008/06/13_14:05:00 info: do_election_count_vote: Election ignore: no-vote from node-a
crmd[12126]: 2008/06/13_14:05:00 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[12126]: 2008/06/13_14:05:00 debug: crm_timer_stop: Stopping Election Timeout (I_ELECTION_DC:120000ms), src=16
crmd[12126]: 2008/06/13_14:05:00 debug: register_fsa_input_adv: do_election_check appended FSA input 9 (I_ELECTION_DC) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:05:00 debug: do_election_check: Destroying voted hash
crmd[12126]: 2008/06/13_14:05:00 debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[12126]: 2008/06/13_14:05:00 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:05:00 info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[12126]: 2008/06/13_14:05:00 debug: do_fsa_action: actions:trace: 	// A_TE_START
crmd[12126]: 2008/06/13_14:05:00 info: do_te_control: Registering TE UUID: 81471eca-6a9e-410b-b2b2-db41164a8f06
crmd[12126]: 2008/06/13_14:05:00 info: G_main_add_TriggerHandler: Added signal manual handler
crmd[12126]: 2008/06/13_14:05:00 info: G_main_add_TriggerHandler: Added signal manual handler
crmd[12126]: 2008/06/13_14:05:00 WARN: cib_client_add_notify_callback: Callback already present
crmd[12126]: 2008/06/13_14:05:00 info: set_graph_functions: Setting custom graph functions
crmd[12126]: 2008/06/13_14:05:00 info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
crmd[12126]: 2008/06/13_14:05:00 debug: do_te_control: Transitioner is now active
crmd[12126]: 2008/06/13_14:05:00 debug: do_fsa_action: actions:trace: 	// A_PE_START
crmd[12126]: 2008/06/13_14:05:00 info: start_subsystem: Starting sub-system "pengine"
crmd[12132]: 2008/06/13_14:05:00 debug: start_subsystem: Executing "/usr/lib64/heartbeat/pengine (pengine)" (pid 12132)
pengine[12132]: 2008/06/13_14:05:00 info: G_main_add_SignalHandler: Added signal handler for signal 15
pengine[12132]: 2008/06/13_14:05:00 debug: main: Init server comms
pengine[12132]: 2008/06/13_14:05:00 info: main: Starting pengine
crmd[12126]: 2008/06/13_14:05:05 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/pengine
crmd[12126]: 2008/06/13_14:05:05 WARN: do_fsa_action: Action A_PE_START took 5010ms to complete
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
crmd[12126]: 2008/06/13_14:05:05 debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=20
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
cib[12122]: 2008/06/13_14:05:05 info: cib_process_readwrite: We are now in R/W mode
crmd[12126]: 2008/06/13_14:05:05 info: do_dc_takeover: Taking over DC status for this partition
cib[12122]: 2008/06/13_14:05:05 debug: update_validation: Testing 'transitional-0.6' validation
cib[12122]: 2008/06/13_14:05:05 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:05 notice: update_validation: Upgrading transitional-0.6-style configuration to pacemaker-0.7 with /usr/share/heartbeat/upgrade.xsl
cib[12122]: 2008/06/13_14:05:05 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:05 info: update_validation: Transformation /usr/share/heartbeat/upgrade.xsl successful
cib[12122]: 2008/06/13_14:05:05 notice: update_validation: Upgraded from transitional-0.6 to pacemaker-0.7 validation
cib[12122]: 2008/06/13_14:05:05 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:05 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:05 debug: activateCibXml: Triggering CIB write for cib_master op
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: - <cib epoch="1" num_updates="2" validate-with="transitional-0.6"/>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: + <cib crm_feature_set="" epoch="2" num_updates="1" dc-uuid="0" remote-tls-port="0" validate-with="pacemaker-0.7"/>
cib[12122]: 2008/06/13_14:05:05 debug: send_peer_reply: Sending update diff 0.1.2 -> 0.2.1
cib[12122]: 2008/06/13_14:05:05 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:05 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:05 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: - <cib crm_feature_set="" epoch="2"/>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: + <cib crm_feature_set="3.0" epoch="3"/>
cib[12122]: 2008/06/13_14:05:05 debug: send_peer_reply: Sending update diff 0.2.1 -> 0.3.1
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
cib[12122]: 2008/06/13_14:05:05 debug: Forking temp process write_cib_contents
crmd[12126]: 2008/06/13_14:05:05 debug: initialize_join: join-1: Initializing join data (flag=true)
cib[12133]: 2008/06/13_14:05:05 debug: write_cib_contents: Archiving current version
crmd[12126]: 2008/06/13_14:05:05 info: join_make_offer: Making join offers based on membership 2
cib[12133]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
crmd[12126]: 2008/06/13_14:05:05 debug: join_make_offer: join-1: Sending offer to node-a
crmd[12126]: 2008/06/13_14:05:05 debug: join_make_offer: join-1: Sending offer to node-b
crmd[12126]: 2008/06/13_14:05:05 info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
crmd[12126]: 2008/06/13_14:05:05 debug: fsa_dump_inputs: Added input: 0000000000000001 (R_THE_DC)
cib[12133]: 2008/06/13_14:05:05 debug: archive_file: /var/lib/heartbeat/crm/cib.xml archived as /var/lib/heartbeat/crm/cib.xml.last
crmd[12126]: 2008/06/13_14:05:05 debug: fsa_dump_inputs: Added input: 0000000000000010 (R_JOIN_OK)
crmd[12126]: 2008/06/13_14:05:05 debug: fsa_dump_inputs: Added input: 0000000000000080 (R_INVOKE_PE)
crmd[12126]: 2008/06/13_14:05:05 debug: fsa_dump_inputs: Added input: 0000000000000200 (R_PE_CONNECTED)
crmd[12126]: 2008/06/13_14:05:05 debug: fsa_dump_inputs: Added input: 0000000000000400 (R_TE_CONNECTED)
cib[12122]: 2008/06/13_14:05:05 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
crmd[12126]: 2008/06/13_14:05:05 debug: fsa_dump_inputs: Added input: 0000000000002000 (R_PE_REQUIRED)
cib[12122]: 2008/06/13_14:05:05 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:05 debug: register_fsa_input_adv: handle_request appended FSA input 10 (I_NULL) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[12126]: 2008/06/13_14:05:05 debug: do_election_count_vote: Created voted hash
crmd[12126]: 2008/06/13_14:05:05 debug: do_election_count_vote: Election 2, owner: 8029f8c4-1f03-4695-a78a-29c02fdd399c
crmd[12126]: 2008/06/13_14:05:05 info: do_election_count_vote: Election check: vote from node-a
crmd[12126]: 2008/06/13_14:05:05 debug: do_election_count_vote: Election pass: born_on
crmd[12126]: 2008/06/13_14:05:05 info: do_election_count_vote: Election won over node-a
crmd[12126]: 2008/06/13_14:05:05 debug: register_fsa_input_adv: do_election_count_vote appended FSA input 11 (I_ELECTION) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[12126]: 2008/06/13_14:05:05 debug: do_election_check: Ignore election check: we not in an election
crmd[12126]: 2008/06/13_14:05:05 debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
crmd[12126]: 2008/06/13_14:05:05 info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:05 debug: crm_timer_stop: Stopping Integration Timer (I_INTEGRATED:180000ms), src=20
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:05 debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
crmd[12126]: 2008/06/13_14:05:05 debug: do_election_vote: Destroying voted hash
crmd[12126]: 2008/06/13_14:05:05 debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=22
crmd[12126]: 2008/06/13_14:05:05 info: te_connect_stonith: Attempting connection to fencing daemon...
cib[12133]: 2008/06/13_14:05:05 debug: archive_file: /var/lib/heartbeat/crm/cib.xml.sig archived as /var/lib/heartbeat/crm/cib.xml.sig.last
cib[12133]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12133]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12133]: 2008/06/13_14:05:05 debug: write_cib_contents: Verified CIB archive
cib[12133]: 2008/06/13_14:05:05 debug: write_cib_contents: Wrote CIB to disk
cib[12122]: 2008/06/13_14:05:05 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[12133]: 2008/06/13_14:05:05 info: write_cib_contents: Wrote version 0.3.1 of the CIB to disk (digest: 476d303e46caae14df58f53dded39338)
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: - <cib epoch="3"/>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: + <cib epoch="4">
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +   <configuration>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +     <crm_config>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +         <attributes>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="0.7.0-32a830e35466 tip"/>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +         </attributes>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +       </cluster_property_set>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +     </crm_config>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: +   </configuration>
cib[12122]: 2008/06/13_14:05:05 info: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:05 debug: send_peer_reply: Sending update diff 0.3.1 -> 0.4.1
cib[12133]: 2008/06/13_14:05:05 debug: write_cib_contents: Wrote digest to disk
cib[12133]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12133]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12133]: 2008/06/13_14:05:05 debug: write_cib_contents: Wrote and verified CIB
cib[12122]: 2008/06/13_14:05:05 info: Managed write_cib_contents process 12133 exited with return code 0.
cib[12122]: 2008/06/13_14:05:05 debug: Forking temp process write_cib_contents
cib[12134]: 2008/06/13_14:05:05 debug: write_cib_contents: Archiving current version
cib[12134]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12134]: 2008/06/13_14:05:05 debug: archive_file: /var/lib/heartbeat/crm/cib.xml archived as /var/lib/heartbeat/crm/cib.xml.last
cib[12134]: 2008/06/13_14:05:05 debug: archive_file: /var/lib/heartbeat/crm/cib.xml.sig archived as /var/lib/heartbeat/crm/cib.xml.sig.last
cib[12134]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12134]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12134]: 2008/06/13_14:05:05 debug: write_cib_contents: Verified CIB archive
cib[12134]: 2008/06/13_14:05:05 debug: write_cib_contents: Wrote CIB to disk
cib[12134]: 2008/06/13_14:05:05 info: write_cib_contents: Wrote version 0.4.1 of the CIB to disk (digest: eeede0d8365be876883acee01392585e)
cib[12134]: 2008/06/13_14:05:05 debug: write_cib_contents: Wrote digest to disk
cib[12134]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12134]: 2008/06/13_14:05:05 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12134]: 2008/06/13_14:05:05 debug: write_cib_contents: Wrote and verified CIB
cib[12122]: 2008/06/13_14:05:05 info: Managed write_cib_contents process 12134 exited with return code 0.
crmd[12126]: 2008/06/13_14:05:06 debug: stonithd_signon: creating connection
crmd[12126]: 2008/06/13_14:05:06 debug: sending out the signon msg.
stonithd[12124]: 2008/06/13_14:05:06 debug: client tengine (pid=12126) succeeded to signon to stonithd.
crmd[12126]: 2008/06/13_14:05:06 debug: signed on to stonithd.
crmd[12126]: 2008/06/13_14:05:06 info: te_connect_stonith: Connected
crmd[12126]: 2008/06/13_14:05:06 debug: handle_request: Raising I_JOIN_OFFER: join-1
crmd[12126]: 2008/06/13_14:05:06 debug: register_fsa_input_adv: route_message appended FSA input 12 (I_JOIN_OFFER) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:06 debug: register_fsa_input_adv: handle_request appended FSA input 13 (I_NULL) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:06 debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_WARN  
crmd[12126]: 2008/06/13_14:05:06 WARN: do_log: [[FSA]] Input I_JOIN_OFFER from route_message() received in state (S_ELECTION)
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[12126]: 2008/06/13_14:05:06 debug: do_election_count_vote: Created voted hash
crmd[12126]: 2008/06/13_14:05:06 debug: do_election_count_vote: Election 3, owner: db8f2da4-a7fb-40bf-bf14-befe4af11db7
crmd[12126]: 2008/06/13_14:05:06 info: do_election_count_vote: Updated voted hash for node-b to vote
crmd[12126]: 2008/06/13_14:05:06 info: do_election_count_vote: Election ignore: our vote (node-b)
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[12126]: 2008/06/13_14:05:06 info: do_election_check: Still waiting on 1 non-votes (2 total)
crmd[12126]: 2008/06/13_14:05:06 debug: register_fsa_input_adv: handle_request appended FSA input 14 (I_NULL) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[12126]: 2008/06/13_14:05:06 debug: do_election_count_vote: Election 3, owner: db8f2da4-a7fb-40bf-bf14-befe4af11db7
crmd[12126]: 2008/06/13_14:05:06 info: do_election_count_vote: Updated voted hash for node-a to no-vote
crmd[12126]: 2008/06/13_14:05:06 info: do_election_count_vote: Election ignore: no-vote from node-a
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[12126]: 2008/06/13_14:05:06 debug: crm_timer_stop: Stopping Election Timeout (I_ELECTION_DC:120000ms), src=22
crmd[12126]: 2008/06/13_14:05:06 debug: register_fsa_input_adv: do_election_check appended FSA input 15 (I_ELECTION_DC) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:05:06 debug: do_election_check: Destroying voted hash
crmd[12126]: 2008/06/13_14:05:06 debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:05:06 info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_TE_START
crmd[12126]: 2008/06/13_14:05:06 debug: do_te_control: Internal TE is already active
crmd[12126]: 2008/06/13_14:05:06 debug: do_fsa_action: actions:trace: 	// A_PE_START
crmd[12126]: 2008/06/13_14:05:06 info: start_subsystem: Starting sub-system "pengine"
crmd[12126]: 2008/06/13_14:05:06 WARN: start_subsystem: Client pengine already running as pid 12132
crmd[12126]: 2008/06/13_14:05:11 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/pengine
crmd[12126]: 2008/06/13_14:05:11 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:11 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
crmd[12126]: 2008/06/13_14:05:11 debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=25
crmd[12126]: 2008/06/13_14:05:11 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:11 debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
crmd[12126]: 2008/06/13_14:05:11 info: do_dc_takeover: Taking over DC status for this partition
cib[12122]: 2008/06/13_14:05:11 info: cib_process_readwrite: We are now in R/O mode
cib[12122]: 2008/06/13_14:05:11 info: cib_process_readwrite: We are now in R/W mode
cib[12122]: 2008/06/13_14:05:11 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:11 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:11 debug: log_data_element: cib:diff: - <cib num_updates="1"/>
cib[12122]: 2008/06/13_14:05:11 debug: log_data_element: cib:diff: + <cib num_updates="2"/>
cib[12122]: 2008/06/13_14:05:11 debug: send_peer_reply: Sending update diff 0.4.1 -> 0.4.2
cib[12122]: 2008/06/13_14:05:11 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:11 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:11 debug: log_data_element: cib:diff: - <cib num_updates="2"/>
cib[12122]: 2008/06/13_14:05:11 debug: log_data_element: cib:diff: + <cib num_updates="3"/>
cib[12122]: 2008/06/13_14:05:11 debug: send_peer_reply: Sending update diff 0.4.2 -> 0.4.3
crmd[12126]: 2008/06/13_14:05:11 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
cib[12122]: 2008/06/13_14:05:11 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
crmd[12126]: 2008/06/13_14:05:11 debug: initialize_join: join-2: Initializing join data (flag=true)
cib[12122]: 2008/06/13_14:05:11 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:11 debug: join_make_offer: join-2: Sending offer to node-a
crmd[12126]: 2008/06/13_14:05:11 debug: join_make_offer: join-2: Sending offer to node-b
crmd[12126]: 2008/06/13_14:05:11 info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
cib[12122]: 2008/06/13_14:05:11 debug: log_data_element: cib:diff: - <cib num_updates="3"/>
cib[12122]: 2008/06/13_14:05:11 debug: log_data_element: cib:diff: + <cib num_updates="4"/>
cib[12122]: 2008/06/13_14:05:11 debug: send_peer_reply: Sending update diff 0.4.3 -> 0.4.4
crmd[12126]: 2008/06/13_14:05:12 debug: handle_request: Raising I_JOIN_OFFER: join-2
crmd[12126]: 2008/06/13_14:05:12 debug: register_fsa_input_adv: route_message appended FSA input 16 (I_JOIN_OFFER) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:12 debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:12 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
crmd[12126]: 2008/06/13_14:05:12 info: update_dc: Set DC to node-b (3.0)
crmd[12126]: 2008/06/13_14:05:12 debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
crmd[12126]: 2008/06/13_14:05:12 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:12 debug: join_query_callback: Respond to join offer join-2
crmd[12126]: 2008/06/13_14:05:12 debug: join_query_callback: Acknowledging node-b as our DC
crmd[12126]: 2008/06/13_14:05:12 debug: register_fsa_input_adv: route_message appended FSA input 17 (I_JOIN_REQUEST) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:12 debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:12 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
crmd[12126]: 2008/06/13_14:05:12 debug: do_dc_join_filter_offer: Processing req from node-b
crmd[12126]: 2008/06/13_14:05:12 debug: do_dc_join_filter_offer: join-2: Welcoming node node-b (ref join_request-crmd-1213333512-9)
crmd[12126]: 2008/06/13_14:05:12 debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-2
crmd[12126]: 2008/06/13_14:05:12 debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
crmd[12126]: 2008/06/13_14:05:12 debug: do_dc_join_filter_offer: join-2: Still waiting on 1 outstanding offers
crmd[12126]: 2008/06/13_14:05:13 debug: register_fsa_input_adv: route_message appended FSA input 18 (I_JOIN_REQUEST) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:13 debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
crmd[12126]: 2008/06/13_14:05:13 debug: do_dc_join_filter_offer: Processing req from node-a
crmd[12126]: 2008/06/13_14:05:13 debug: do_dc_join_filter_offer: join-2: Welcoming node node-a (ref join_request-crmd-1213333512-5)
crmd[12126]: 2008/06/13_14:05:13 debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-2
crmd[12126]: 2008/06/13_14:05:13 debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
crmd[12126]: 2008/06/13_14:05:13 debug: check_join_state: join-2: Integration of 2 peers complete: do_dc_join_filter_offer
crmd[12126]: 2008/06/13_14:05:13 debug: register_fsa_input_adv: check_join_state prepended FSA input 19 (I_INTEGRATED) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:05:13 debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
crmd[12126]: 2008/06/13_14:05:13 info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
attrd[12125]: 2008/06/13_14:05:13 info: attrd_local_callback: Sending full refresh
crmd[12126]: 2008/06/13_14:05:13 info: do_state_transition: All 2 cluster nodes responded to the join offer.
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:13 debug: crm_timer_stop: Stopping Integration Timer (I_INTEGRATED:180000ms), src=25
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
crmd[12126]: 2008/06/13_14:05:13 debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=28
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
crmd[12126]: 2008/06/13_14:05:13 debug: do_dc_join_finalize: Finializing join-2 for 2 clients
crmd[12126]: 2008/06/13_14:05:13 info: update_attrd: Connecting to attrd...
crmd[12126]: 2008/06/13_14:05:13 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/attrd
crmd[12126]: 2008/06/13_14:05:13 debug: update_attrd: sent attrd refresh
crmd[12126]: 2008/06/13_14:05:13 debug: check_join_state: Invoked by do_dc_join_finalize in state: S_FINALIZE_JOIN
cib[12122]: 2008/06/13_14:05:13 info: sync_our_cib: Syncing CIB to all peers
crmd[12126]: 2008/06/13_14:05:13 debug: check_join_state: join-2: Still waiting on 2 integrated nodes
crmd[12126]: 2008/06/13_14:05:13 debug: finalize_join: Notifying 2 clients of join-2 results
crmd[12126]: 2008/06/13_14:05:13 debug: finalize_join_for: join-2: ACK'ing join request from node-a, state member
crmd[12126]: 2008/06/13_14:05:13 debug: finalize_join_for: join-2: ACK'ing join request from node-b, state member
crmd[12126]: 2008/06/13_14:05:13 debug: fsa_dump_inputs: Added input: 0000000000020000 (R_HAVE_CIB)
cib[12122]: 2008/06/13_14:05:13 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:13 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:13 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: - <cib epoch="4" num_updates="4" dc-uuid="0"/>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: + <cib epoch="5" num_updates="1" dc-uuid="db8f2da4-a7fb-40bf-bf14-befe4af11db7"/>
cib[12122]: 2008/06/13_14:05:13 debug: send_peer_reply: Sending update diff 0.4.4 -> 0.5.1
cib[12122]: 2008/06/13_14:05:13 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:13 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:13 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: - <cib epoch="5"/>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: + <cib epoch="6">
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +   <configuration>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +     <nodes>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +       <node id="8029f8c4-1f03-4695-a78a-29c02fdd399c" uname="node-a" type="normal" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +     </nodes>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +   </configuration>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:13 debug: send_peer_reply: Sending update diff 0.5.1 -> 0.6.1
cib[12122]: 2008/06/13_14:05:13 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:13 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:13 debug: handle_request: Raising I_JOIN_RESULT: join-2
crmd[12126]: 2008/06/13_14:05:13 debug: register_fsa_input_adv: route_message appended FSA input 20 (I_JOIN_RESULT) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:13 debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
crmd[12126]: 2008/06/13_14:05:13 info: update_dc: Set DC to node-b (3.0)
crmd[12126]: 2008/06/13_14:05:13 debug: do_cl_join_finalize_respond: Confirming join join-2: join_ack_nack
crmd[12126]: 2008/06/13_14:05:13 debug: do_cl_join_finalize_respond: join-2: Join complete.  Sending local LRM status to node-b
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
crmd[12126]: 2008/06/13_14:05:13 debug: do_dc_join_ack: Ignoring op=join_ack_nack message from node-b
cib[12122]: 2008/06/13_14:05:13 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: - <cib epoch="6"/>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: + <cib epoch="7">
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +   <configuration>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +     <nodes>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +       <node id="db8f2da4-a7fb-40bf-bf14-befe4af11db7" uname="node-b" type="normal" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +     </nodes>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: +   </configuration>
cib[12122]: 2008/06/13_14:05:13 info: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:13 debug: send_peer_reply: Sending update diff 0.6.1 -> 0.7.1
cib[12122]: 2008/06/13_14:05:13 debug: Forking temp process write_cib_contents
cib[12135]: 2008/06/13_14:05:13 debug: write_cib_contents: Archiving current version
cib[12135]: 2008/06/13_14:05:13 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12135]: 2008/06/13_14:05:13 debug: archive_file: /var/lib/heartbeat/crm/cib.xml archived as /var/lib/heartbeat/crm/cib.xml.last
cib[12135]: 2008/06/13_14:05:13 debug: archive_file: /var/lib/heartbeat/crm/cib.xml.sig archived as /var/lib/heartbeat/crm/cib.xml.sig.last
cib[12135]: 2008/06/13_14:05:13 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12135]: 2008/06/13_14:05:13 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12135]: 2008/06/13_14:05:13 debug: write_cib_contents: Verified CIB archive
cib[12135]: 2008/06/13_14:05:13 debug: write_cib_contents: Wrote CIB to disk
cib[12135]: 2008/06/13_14:05:13 info: write_cib_contents: Wrote version 0.7.1 of the CIB to disk (digest: 56dc4a238be06d2ada9c19d46fbb0a97)
cib[12135]: 2008/06/13_14:05:13 debug: write_cib_contents: Wrote digest to disk
cib[12135]: 2008/06/13_14:05:13 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12135]: 2008/06/13_14:05:13 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12135]: 2008/06/13_14:05:13 debug: write_cib_contents: Wrote and verified CIB
cib[12122]: 2008/06/13_14:05:13 info: Managed write_cib_contents process 12135 exited with return code 0.
crmd[12126]: 2008/06/13_14:05:13 debug: register_fsa_input_adv: route_message appended FSA input 21 (I_JOIN_RESULT) (cause=C_HA_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:13 debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
crmd[12126]: 2008/06/13_14:05:13 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
crmd[12126]: 2008/06/13_14:05:13 info: do_dc_join_ack: join-2: Updating node state to member for node-b
crmd[12126]: 2008/06/13_14:05:13 debug: erase_status_tag: Erasing //node_state[@uname="node-b"]/lrm
crmd[12126]: 2008/06/13_14:05:13 debug: do_dc_join_ack: join-2: Registered callback for LRM update 23
cib[12122]: 2008/06/13_14:05:13 debug: cib_process_xpath: //node_state[@uname="node-b"]/lrm was already removed
cib[12122]: 2008/06/13_14:05:13 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:13 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: - <cib num_updates="1"/>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: + <cib num_updates="2"/>
cib[12122]: 2008/06/13_14:05:13 debug: send_peer_reply: Sending update diff 0.7.1 -> 0.7.2
cib[12122]: 2008/06/13_14:05:13 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:13 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: - <cib num_updates="2"/>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: + <cib num_updates="3">
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7" uname="node-b" ha="active" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_lrm_query" shutdown="0" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +         <lrm_resources/>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:13 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:13 debug: send_peer_reply: Sending update diff 0.7.2 -> 0.7.3
crmd[12126]: 2008/06/13_14:05:13 debug: join_update_complete_callback: Join update 23 complete
crmd[12126]: 2008/06/13_14:05:13 debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
crmd[12126]: 2008/06/13_14:05:13 debug: check_join_state: join-2: Still waiting on 1 finalized nodes
crmd[12126]: 2008/06/13_14:05:14 debug: register_fsa_input_adv: route_message appended FSA input 22 (I_JOIN_RESULT) (cause=C_HA_MESSAGE) with data
cib[12122]: 2008/06/13_14:05:14 debug: cib_process_xpath: //node_state[@uname="node-a"]/lrm was already removed
crmd[12126]: 2008/06/13_14:05:14 debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
cib[12122]: 2008/06/13_14:05:14 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
crmd[12126]: 2008/06/13_14:05:14 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
cib[12122]: 2008/06/13_14:05:14 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:14 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
crmd[12126]: 2008/06/13_14:05:14 info: do_dc_join_ack: join-2: Updating node state to member for node-a
crmd[12126]: 2008/06/13_14:05:14 debug: erase_status_tag: Erasing //node_state[@uname="node-a"]/lrm
crmd[12126]: 2008/06/13_14:05:14 debug: do_dc_join_ack: join-2: Registered callback for LRM update 25
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: - <cib num_updates="3"/>
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: + <cib num_updates="4"/>
cib[12122]: 2008/06/13_14:05:14 debug: send_peer_reply: Sending update diff 0.7.3 -> 0.7.4
cib[12122]: 2008/06/13_14:05:14 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:14 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:14 debug: join_update_complete_callback: Join update 25 complete
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: - <cib num_updates="4"/>
crmd[12126]: 2008/06/13_14:05:14 debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: + <cib num_updates="5">
crmd[12126]: 2008/06/13_14:05:14 debug: check_join_state: join-2 complete: join_update_complete_callback
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:14 debug: register_fsa_input_adv: check_join_state appended FSA input 23 (I_FINALIZED) (cause=C_FSA_INTERNAL) without data
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c" uname="node-a" ha="active" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_lrm_query" shutdown="0" __crm_diff_marker__="added:top">
crmd[12126]: 2008/06/13_14:05:14 debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
crmd[12126]: 2008/06/13_14:05:14 info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +         <lrm_resources/>
crmd[12126]: 2008/06/13_14:05:14 info: populate_cib_nodes_ha: Requesting the list of configured nodes
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:14 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:14 debug: send_peer_reply: Sending update diff 0.7.4 -> 0.7.5
crmd[12126]: 2008/06/13_14:05:15 notice: populate_cib_nodes_ha: Node: node-b (uuid: db8f2da4-a7fb-40bf-bf14-befe4af11db7)
crmd[12126]: 2008/06/13_14:05:15 notice: populate_cib_nodes_ha: Node: node-a (uuid: 8029f8c4-1f03-4695-a78a-29c02fdd399c)
crmd[12126]: 2008/06/13_14:05:15 debug: ghash_update_cib_node: Updating node-a: true (overwrite=true) hash_size=2
cib[12122]: 2008/06/13_14:05:15 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
crmd[12126]: 2008/06/13_14:05:15 debug: ghash_update_cib_node: Updating node-b: true (overwrite=true) hash_size=2
cib[12122]: 2008/06/13_14:05:15 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:15 info: do_state_transition: All 2 cluster nodes are eligible to run resources.
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: crm_timer_stop: Stopping Finalization Timer (I_ELECTION:1800000ms), src=28
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
crmd[12126]: 2008/06/13_14:05:15 debug: do_te_invoke: Cancelling the active Transition
crmd[12126]: 2008/06/13_14:05:15 debug: abort_transition_graph: do_te_invoke:189 - Triggered graph processing (complete=1) : Peer Cancelled
crmd[12126]: 2008/06/13_14:05:15 debug: register_fsa_input_adv: abort_transition_graph appended FSA input 24 (I_PE_CALC) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:05:15 debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
crmd[12126]: 2008/06/13_14:05:15 debug: do_pe_invoke: Requesting the current CIB: S_POLICY_ENGINE
crmd[12126]: 2008/06/13_14:05:15 debug: te_update_diff: Processing diff (cib_modify): 0.7.5 -> 0.7.6 (S_POLICY_ENGINE)
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: - <cib num_updates="5"/>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: + <cib num_updates="6"/>
cib[12122]: 2008/06/13_14:05:15 debug: send_peer_reply: Sending update diff 0.7.5 -> 0.7.6
cib[12122]: 2008/06/13_14:05:15 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:15 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
crmd[12126]: 2008/06/13_14:05:15 debug: te_update_diff: Processing diff (cib_modify): 0.7.6 -> 0.7.7 (S_POLICY_ENGINE)
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: - <cib num_updates="6"/>
crmd[12126]: 2008/06/13_14:05:15 debug: ccm_node_update_complete: Node update 27 complete
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: + <cib num_updates="7"/>
cib[12122]: 2008/06/13_14:05:15 debug: send_peer_reply: Sending update diff 0.7.6 -> 0.7.7
crmd[12126]: 2008/06/13_14:05:15 debug: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1213333515-13, seq=2, quorate=1
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: Default action timeout: 20s
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: Default stickiness: 0
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: Stop all active resources: false
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: Default failure timeout: 0
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: Default migration threshold: 0
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: STONITH of failed nodes is disabled
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
pengine[12132]: 2008/06/13_14:05:15 debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
pengine[12132]: 2008/06/13_14:05:15 info: determine_online_status: Node node-b is online
pengine[12132]: 2008/06/13_14:05:15 info: determine_online_status: Node node-a is online
pengine[12132]: 2008/06/13_14:05:15 debug: get_last_sequence: Series file /var/lib/heartbeat/pengine/pe-input.last does not exist
crmd[12126]: 2008/06/13_14:05:15 debug: register_fsa_input_adv: route_message appended FSA input 25 (I_PE_SUCCESS) (cause=C_IPC_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:15 debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:05:15 info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
crmd[12126]: 2008/06/13_14:05:15 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:15 debug: stop_te_timer: global timer was already stopped
crmd[12126]: 2008/06/13_14:05:15 info: unpack_graph: Unpacked transition 0: 2 actions in 2 synapses
crmd[12126]: 2008/06/13_14:05:15 info: do_te_invoke: Processing graph 0 derived from /var/lib/heartbeat/pengine/pe-input-0.bz2
crmd[12126]: 2008/06/13_14:05:15 debug: start_global_timer: Starting abort timer: 60000ms
crmd[12126]: 2008/06/13_14:05:15 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:15 info: send_rsc_command: Initiating action 2: probe_complete probe_complete on node-a
crmd[12126]: 2008/06/13_14:05:15 debug: send_rsc_command: Skipping wait for 2
crmd[12126]: 2008/06/13_14:05:15 info: send_rsc_command: Initiating action 3: probe_complete probe_complete on node-b
crmd[12126]: 2008/06/13_14:05:15 debug: send_rsc_command: Skipping wait for 3
crmd[12126]: 2008/06/13_14:05:15 debug: run_graph: Transition 0: (Complete=0, Pending=0, Fired=2, Skipped=0, Incomplete=0)
crmd[12126]: 2008/06/13_14:05:15 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:15 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:15 debug: start_global_timer: Starting abort timer: 60000ms
crmd[12126]: 2008/06/13_14:05:15 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:15 debug: run_graph: ====================================================
crmd[12126]: 2008/06/13_14:05:15 info: run_graph: Transition 0: (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0)
crmd[12126]: 2008/06/13_14:05:15 debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:15 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:15 info: notify_crmd: Transition 0 status: done - <null>
crmd[12126]: 2008/06/13_14:05:15 debug: register_fsa_input_adv: notify_crmd appended FSA input 26 (I_TE_SUCCESS) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:05:15 debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:05:15 info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:15 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
pengine[12132]: 2008/06/13_14:05:15 info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-0.bz2
cib[12122]: 2008/06/13_14:05:15 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:15 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: - <cib num_updates="7"/>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: + <cib num_updates="8">
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +       <transient_attributes id="db8f2da4-a7fb-40bf-bf14-befe4af11db7" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +         <instance_attributes id="status-db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +           <attributes>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +             <nvpair id="status-db8f2da4-a7fb-40bf-bf14-befe4af11db7-probe_complete" name="probe_complete" value="true"/>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +           </attributes>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +         </instance_attributes>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +       </transient_attributes>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:15 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:15 debug: send_peer_reply: Sending update diff 0.7.7 -> 0.7.8
cib[12122]: 2008/06/13_14:05:16 info: validate_xml: Validating configuration with pacemaker-0.7: /usr/share/heartbeat/pacemaker-0.7.rng
cib[12122]: 2008/06/13_14:05:16 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: - <cib num_updates="8"/>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: + <cib num_updates="9">
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +       <transient_attributes id="8029f8c4-1f03-4695-a78a-29c02fdd399c" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +         <instance_attributes id="status-8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +           <attributes>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +             <nvpair id="status-8029f8c4-1f03-4695-a78a-29c02fdd399c-probe_complete" name="probe_complete" value="true"/>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +           </attributes>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +         </instance_attributes>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +       </transient_attributes>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:16 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:16 debug: send_peer_reply: Sending update diff 0.7.8 -> 0.7.9
cib[12122]: 2008/06/13_14:05:34 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:34 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:34 debug: activateCibXml: Triggering CIB write for cib_modify op
crmd[12126]: 2008/06/13_14:05:34 debug: te_update_diff: Processing diff (cib_modify): 0.7.9 -> 0.8.1 (S_IDLE)
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: - <cib epoch="7" num_updates="9" validate-with="pacemaker-0.7"/>
crmd[12126]: 2008/06/13_14:05:34 debug: need_abort: Aborting on change to epoch
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: + <cib epoch="8" num_updates="1" validate-with="transitional-0.6">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs <cib epoch="8" num_updates="1" validate-with="transitional-0.6">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +   <configuration>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs   <configuration>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +     <crm_config>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs     <crm_config>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       <cluster_property_set id="cib-bootstrap-options">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         <attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         <attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="false" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="false" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-start-failure-is-fatal" name="start-failure-is-fatal" value="false" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-start-failure-is-fatal" name="start-failure-is-fatal" value="false" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="INFINITY" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="INFINITY" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-default-migration-threshold" name="default-migration-threshold" value="1" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-default-migration-threshold" name="default-migration-threshold" value="1" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="120s" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="120s" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         </attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         </attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       </cluster_property_set>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       </cluster_property_set>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +     </crm_config>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs     </crm_config>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +     <resources>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs     <resources>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       <clone id="stonith1" globally_unique="false" __crm_diff_marker__="added:top">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       <clone id="stonith1" globally_unique="false" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         <instance_attributes id="clone-attrs">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         <instance_attributes id="clone-attrs">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +             <nvpair id="stonith1-clone_max" name="clone_max" value="2"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs             <nvpair id="stonith1-clone_max" name="clone_max" value="2"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +             <nvpair id="stontih1-clone_node_max" name="clone_node_max" value="1"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs             <nvpair id="stontih1-clone_node_max" name="clone_node_max" value="1"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           </attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           </attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         </instance_attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         </instance_attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         <primitive id="stonith1-ssh" class="stonith" type="external/ssh" provider="heartbeat">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         <primitive id="stonith1-ssh" class="stonith" type="external/ssh" provider="heartbeat">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <operations>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <operations>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +             <op id="stonith1-start" name="start" timeout="30s"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs             <op id="stonith1-start" name="start" timeout="30s"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +             <op id="stonith1-monitor" name="monitor" timeout="30s" interval="10s"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs             <op id="stonith1-monitor" name="monitor" timeout="30s" interval="10s"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           </operations>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           </operations>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <instance_attributes id="stonith1-attrs">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <instance_attributes id="stonith1-attrs">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +             <attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs             <attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +               <nvpair id="stonith1-hostlist" name="hostlist" value="node-a node-b"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs               <nvpair id="stonith1-hostlist" name="hostlist" value="node-a node-b"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +             </attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs             </attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           </instance_attributes>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           </instance_attributes>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         </primitive>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         </primitive>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       </clone>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       </clone>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       <primitive id="dummy" class="ocf" type="Dummy" provider="heartbeat" __crm_diff_marker__="added:top">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       <primitive id="dummy" class="ocf" type="Dummy" provider="heartbeat" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         <operations>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         <operations>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <op id="start" name="start" timeout="30s" on_fail="restart"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <op id="start" name="start" timeout="30s" on_fail="restart"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <op id="monitor" name="monitor" timeout="30s" on_fail="restart" interval="10s"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <op id="monitor" name="monitor" timeout="30s" on_fail="restart" interval="10s"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <op id="stop" name="stop" timeout="30s" on_fail="fence"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <op id="stop" name="stop" timeout="30s" on_fail="fence"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         </operations>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         </operations>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       </primitive>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       </primitive>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +     </resources>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs     </resources>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +     <constraints>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs     <constraints>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       <rsc_location id="rsc_location" rsc="dummy" __crm_diff_marker__="added:top">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       <rsc_location id="rsc_location" rsc="dummy" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         <rule id="rule1" score="200">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         <rule id="rule1" score="200">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <expression id="exp1" attribute="#uname" operation="eq" value="node-a"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <expression id="exp1" attribute="#uname" operation="eq" value="node-a"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         </rule>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         </rule>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         <rule id="rule2" score="100">
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         <rule id="rule2" score="100">
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +           <expression id="exp2" attribute="#uname" operation="eq" value="node-b"/>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs           <expression id="exp2" attribute="#uname" operation="eq" value="node-b"/>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +         </rule>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs         </rule>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +       </rsc_location>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs       </rsc_location>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +     </constraints>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs     </constraints>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: +   </configuration>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs   </configuration>
cib[12122]: 2008/06/13_14:05:34 info: log_data_element: cib:diff: + </cib>
crmd[12126]: 2008/06/13_14:05:34 debug: log_data_element: need_abort: Abort: CIB Attrs </cib>
cib[12122]: 2008/06/13_14:05:34 debug: send_peer_reply: Sending update diff 0.7.9 -> 0.8.1
crmd[12126]: 2008/06/13_14:05:34 debug: abort_transition_graph: te_update_diff:144 - Triggered graph processing (complete=1) : Non-status change
cib[12122]: 2008/06/13_14:05:34 debug: Forking temp process write_cib_contents
crmd[12126]: 2008/06/13_14:05:34 debug: register_fsa_input_adv: abort_transition_graph appended FSA input 27 (I_PE_CALC) (cause=C_FSA_INTERNAL) without data
cib[12140]: 2008/06/13_14:05:34 debug: write_cib_contents: Archiving current version
crmd[12126]: 2008/06/13_14:05:34 debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
cib[12140]: 2008/06/13_14:05:34 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
crmd[12126]: 2008/06/13_14:05:34 info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
cib[12140]: 2008/06/13_14:05:34 debug: archive_file: /var/lib/heartbeat/crm/cib.xml archived as /var/lib/heartbeat/crm/cib.xml.last
crmd[12126]: 2008/06/13_14:05:34 info: do_state_transition: All 2 cluster nodes are eligible to run resources.
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
crmd[12126]: 2008/06/13_14:05:34 debug: do_pe_invoke: Requesting the current CIB: S_POLICY_ENGINE
crmd[12126]: 2008/06/13_14:05:34 debug: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1213333534-16, seq=2, quorate=1
pengine[12132]: 2008/06/13_14:05:34 WARN: process_pe_message: Your current configuration only conforms to transitional-0.6
pengine[12132]: 2008/06/13_14:05:34 WARN: process_pe_message: Please use XXX to upgrade pacemaker-0.7
pengine[12132]: 2008/06/13_14:05:34 debug: update_validation: Testing 'transitional-0.6' validation
pengine[12132]: 2008/06/13_14:05:34 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
pengine[12132]: 2008/06/13_14:05:34 notice: update_validation: Upgrading transitional-0.6-style configuration to pacemaker-0.7 with /usr/share/heartbeat/upgrade.xsl
cib[12140]: 2008/06/13_14:05:34 debug: archive_file: /var/lib/heartbeat/crm/cib.xml.sig archived as /var/lib/heartbeat/crm/cib.xml.sig.last
cib[12140]: 2008/06/13_14:05:34 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12140]: 2008/06/13_14:05:34 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12140]: 2008/06/13_14:05:34 debug: write_cib_contents: Verified CIB archive
cib[12140]: 2008/06/13_14:05:34 debug: write_cib_contents: Wrote CIB to disk
pengine[12132]: 2008/06/13_14:05:34 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
cib[12140]: 2008/06/13_14:05:34 info: write_cib_contents: Wrote version 0.8.1 of the CIB to disk (digest: 6c0e251272209af00907c86d5bb6351b)
cib[12140]: 2008/06/13_14:05:34 debug: write_cib_contents: Wrote digest to disk
cib[12140]: 2008/06/13_14:05:34 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[12140]: 2008/06/13_14:05:34 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
cib[12140]: 2008/06/13_14:05:34 debug: write_cib_contents: Wrote and verified CIB
cib[12122]: 2008/06/13_14:05:34 info: Managed write_cib_contents process 12140 exited with return code 0.
pengine[12132]: 2008/06/13_14:05:34 info: update_validation: Transformation /usr/share/heartbeat/upgrade.xsl successful
pengine[12132]: 2008/06/13_14:05:34 notice: update_validation: Upgraded from transitional-0.6 to pacemaker-0.7 validation
pengine[12132]: 2008/06/13_14:05:34 WARN: process_pe_message: Your configuration was internally updated to pacemaker-0.7
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: Default action timeout: 120s
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: Default stickiness: 1000000
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: Stop all active resources: false
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: Default failure timeout: 0
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: Default migration threshold: 1
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: STONITH of failed nodes is enabled
pengine[12132]: 2008/06/13_14:05:34 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
pengine[12132]: 2008/06/13_14:05:34 notice: unpack_config: On loss of CCM Quorum: Ignore
pengine[12132]: 2008/06/13_14:05:34 WARN: unpack_nodes: Blind faith: not fencing unseen nodes
pengine[12132]: 2008/06/13_14:05:34 info: determine_online_status: Node node-b is online
pengine[12132]: 2008/06/13_14:05:34 info: determine_online_status: Node node-a is online
pengine[12132]: 2008/06/13_14:05:34 notice: clone_print: Clone Set: stonith1
pengine[12132]: 2008/06/13_14:05:34 notice: native_print:     stonith1-ssh:0	(stonith:external/ssh):	Stopped 
pengine[12132]: 2008/06/13_14:05:34 notice: native_print:     stonith1-ssh:1	(stonith:external/ssh):	Stopped 
pengine[12132]: 2008/06/13_14:05:34 notice: native_print: dummy	(ocf::heartbeat:Dummy):	Stopped 
pengine[12132]: 2008/06/13_14:05:34 debug: native_assign_node: Assigning node-a to stonith1-ssh:0
pengine[12132]: 2008/06/13_14:05:34 debug: native_assign_node: Assigning node-b to stonith1-ssh:1
pengine[12132]: 2008/06/13_14:05:34 debug: clone_color: Allocated 2 stonith1 instances of a possible 2
pengine[12132]: 2008/06/13_14:05:34 debug: native_assign_node: Assigning node-a to dummy
pengine[12132]: 2008/06/13_14:05:34 notice: StartRsc:  node-a	Start stonith1-ssh:0
pengine[12132]: 2008/06/13_14:05:34 notice: RecurringOp:  Start recurring monitor (10s) for stonith1-ssh:0 on node-a
pengine[12132]: 2008/06/13_14:05:34 notice: StartRsc:  node-b	Start stonith1-ssh:1
pengine[12132]: 2008/06/13_14:05:34 notice: RecurringOp:  Start recurring monitor (10s) for stonith1-ssh:1 on node-b
pengine[12132]: 2008/06/13_14:05:34 debug: child_stopping_constraints: stonith1 has no active children
pengine[12132]: 2008/06/13_14:05:34 notice: StartRsc:  node-a	Start dummy
pengine[12132]: 2008/06/13_14:05:34 notice: RecurringOp:  Start recurring monitor (10s) for dummy on node-a
crmd[12126]: 2008/06/13_14:05:34 debug: register_fsa_input_adv: route_message appended FSA input 28 (I_PE_SUCCESS) (cause=C_IPC_MESSAGE) with data
crmd[12126]: 2008/06/13_14:05:34 debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:05:34 info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:34 debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
crmd[12126]: 2008/06/13_14:05:34 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:34 debug: stop_te_timer: global timer was already stopped
crmd[12126]: 2008/06/13_14:05:34 info: unpack_graph: Unpacked transition 1: 15 actions in 15 synapses
crmd[12126]: 2008/06/13_14:05:34 info: do_te_invoke: Processing graph 1 derived from /var/lib/heartbeat/pengine/pe-input-1.bz2
crmd[12126]: 2008/06/13_14:05:34 debug: start_global_timer: Starting abort timer: 60000ms
crmd[12126]: 2008/06/13_14:05:34 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:34 debug: initiate_action: Action 4: Increasing IDLE timer to 240000
crmd[12126]: 2008/06/13_14:05:34 info: send_rsc_command: Initiating action 4: monitor stonith1-ssh:0_monitor_0 on node-a
crmd[12126]: 2008/06/13_14:05:34 debug: send_rsc_command: Action 4: Increasing transition 1 timeout to 300000 (2*120000 + 60000)
crmd[12126]: 2008/06/13_14:05:34 info: send_rsc_command: Initiating action 7: monitor stonith1-ssh:1_monitor_0 on node-b
crmd[12126]: 2008/06/13_14:05:34 info: send_rsc_command: Initiating action 5: monitor dummy_monitor_0 on node-a
crmd[12126]: 2008/06/13_14:05:34 info: send_rsc_command: Initiating action 8: monitor dummy_monitor_0 on node-b
crmd[12126]: 2008/06/13_14:05:34 debug: run_graph: Transition 1: (Complete=0, Pending=4, Fired=4, Skipped=0, Incomplete=11)
crmd[12126]: 2008/06/13_14:05:34 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:34 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:34 debug: start_global_timer: Starting abort timer: 300000ms
pengine[12132]: 2008/06/13_14:05:34 info: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-1.bz2
pengine[12132]: 2008/06/13_14:05:34 info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
crmd[12126]: 2008/06/13_14:05:35 debug: get_lrm_resource: Adding rsc stonith1-ssh:1 before operation
lrmd[12123]: 2008/06/13_14:05:35 debug: on_msg_add_rsc:client [12126] adds resource stonith1-ssh:1
lrmd[12123]: 2008/06/13_14:05:35 notice: lrmd_rsc_new(): No lrm_rprovider field in message
crmd[12126]: 2008/06/13_14:05:35 info: do_lrm_rsc_op: Performing op=stonith1-ssh:1_monitor_0 key=7:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06)
lrmd[12123]: 2008/06/13_14:05:35 debug: on_msg_perform_op:2290: copying parameters for rsc stonith1-ssh:1
lrmd[12123]: 2008/06/13_14:05:35 debug: on_msg_perform_op: add an operation operation monitor[2] on stonith::external/ssh::stonith1-ssh:1 for client 12126, its parameters: CRM_meta_op_target_rc=[7] hostlist=[node-a node-b] CRM_meta_timeout=[120000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_clone=[1] CRM_meta_clone_node_max=[1]  to the operation list.
lrmd[12123]: 2008/06/13_14:05:35 info: rsc:stonith1-ssh:1: monitor
crmd[12126]: 2008/06/13_14:05:35 debug: do_lrm_rsc_op: Recording pending op: 2 - stonith1-ssh:1_monitor_0 stonith1-ssh:1:2
lrmd[12141]: 2008/06/13_14:05:35 debug: stonithd_signon: creating connection
stonithd[12124]: 2008/06/13_14:05:35 debug: client STONITH_RA_EXEC_12141 (pid=12141) succeeded to signon to stonithd.
lrmd[12141]: 2008/06/13_14:05:35 debug: sending out the signon msg.
crmd[12126]: 2008/06/13_14:05:35 debug: get_lrm_resource: Adding rsc dummy before operation
lrmd[12123]: 2008/06/13_14:05:35 debug: on_msg_add_rsc:client [12126] adds resource dummy
crmd[12126]: 2008/06/13_14:05:35 info: do_lrm_rsc_op: Performing op=dummy_monitor_0 key=8:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06)
lrmd[12123]: 2008/06/13_14:05:35 debug: on_msg_perform_op:2290: copying parameters for rsc dummy
lrmd[12123]: 2008/06/13_14:05:35 debug: on_msg_perform_op: add an operation operation monitor[3] on ocf::Dummy::dummy for client 12126, its parameters: CRM_meta_op_target_rc=[7] CRM_meta_timeout=[120000] crm_feature_set=[3.0]  to the operation list.
lrmd[12123]: 2008/06/13_14:05:35 info: rsc:dummy: monitor
crmd[12126]: 2008/06/13_14:05:35 debug: do_lrm_rsc_op: Recording pending op: 3 - dummy_monitor_0 dummy:3
lrmd[12141]: 2008/06/13_14:05:35 debug: signed on to stonithd.
lrmd[12141]: 2008/06/13_14:05:35 debug: waiting for the stonithRA reply msg.
stonithd[12124]: 2008/06/13_14:05:35 debug: client STONITH_RA_EXEC_12141 [pid: 12141] requests a resource operation monitor on stonith1-ssh:1 (external/ssh)
stonithd[12124]: 2008/06/13_14:05:35 debug: stonithRA_monitor: stonith1-ssh:1 is not started.
stonithd[12124]: 2008/06/13_14:05:35 debug: Child process unknown_stonith1-ssh:1_monitor [12143] exited, its exit code: 7 when signo=0.
stonithd[12124]: 2008/06/13_14:05:35 debug: stonith1-ssh:1's (external/ssh) op monitor finished. op_result=7
lrmd[12141]: 2008/06/13_14:05:35 debug: a stonith RA operation queue to run, call_id=12143.
lrmd[12141]: 2008/06/13_14:05:35 debug: stonithd_receive_ops_result: begin
crmd[12126]: 2008/06/13_14:05:35 info: process_lrm_event: LRM operation stonith1-ssh:1_monitor_0 (call=2, rc=7) complete 
lrmd[12123]: 2008/06/13_14:05:35 WARN: Managed stonith1-ssh:1:monitor process 12141 exited with return code 7.
stonithd[12124]: 2008/06/13_14:05:35 debug: client STONITH_RA_EXEC_12141 (pid=12141) signed off
crmd[12126]: 2008/06/13_14:05:35 debug: build_operation_update: Calculated digest c96fa7fdbe97d2d472e37ec6c935a0d1 for stonith1-ssh:1_monitor_0 (0:7;7:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06)
cib[12122]: 2008/06/13_14:05:35 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
crmd[12126]: 2008/06/13_14:05:35 debug: log_data_element: build_operation_update: digest:source <parameters hostlist="node-a node-b"/>
cib[12122]: 2008/06/13_14:05:35 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:35 debug: do_update_resource: Sent resource state update message: 32
crmd[12126]: 2008/06/13_14:05:35 debug: process_lrm_event: Op stonith1-ssh:1_monitor_0 (call=2, stop_id=stonith1-ssh:1:2): Confirmed
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: - <cib num_updates="1"/>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + <cib num_updates="2">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:1" type="external/ssh" class="stonith" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition-key="7:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:7;7:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="2" crm_feature_set="3.0" rc-code="7" op-status="0" interval="0" last-run="1213333534" last-rc-change="1213333534" exec-time="10" queue-time="0" op-digest="c96fa7fdbe97d2d472e37ec6c935a0d1"/>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:35 debug: send_peer_reply: Sending update diff 0.8.1 -> 0.8.2
crmd[12126]: 2008/06/13_14:05:35 debug: te_update_diff: Processing diff (cib_modify): 0.8.1 -> 0.8.2 (S_TRANSITION_ENGINE)
crmd[12126]: 2008/06/13_14:05:35 info: match_graph_event: Action stonith1-ssh:1_monitor_0 (7) confirmed on node-b (rc=0)
crmd[12126]: 2008/06/13_14:05:35 debug: cib_rsc_callback: Resource update 32 complete: rc=0
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=1, Pending=3, Fired=0, Skipped=0, Incomplete=11)
Dummy[12142][12149]: 2008/06/13_14:05:35 DEBUG: dummy monitor : 7
lrmd[12123]: 2008/06/13_14:05:35 WARN: Managed dummy:monitor process 12142 exited with return code 7.
crmd[12126]: 2008/06/13_14:05:35 info: process_lrm_event: LRM operation dummy_monitor_0 (call=3, rc=7) complete 
crmd[12126]: 2008/06/13_14:05:35 debug: build_operation_update: Calculated digest f2317cad3d54cec5d7d7aa7d0bf35cf8 for dummy_monitor_0 (0:7;8:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06)

crmd[12126]: 2008/06/13_14:05:35 debug: log_data_element: build_operation_update: digest:source <parameters/>
crmd[12126]: 2008/06/13_14:05:35 debug: do_update_resource: Sent resource state update message: 33
cib[12122]: 2008/06/13_14:05:35 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
crmd[12126]: 2008/06/13_14:05:35 debug: process_lrm_event: Op dummy_monitor_0 (call=3, stop_id=dummy:3): Confirmed
cib[12122]: 2008/06/13_14:05:35 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:35 debug: te_update_diff: Processing diff (cib_modify): 0.8.2 -> 0.8.3 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: - <cib num_updates="2"/>
crmd[12126]: 2008/06/13_14:05:35 info: match_graph_event: Action dummy_monitor_0 (8) confirmed on node-b (rc=0)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + <cib num_updates="3">
crmd[12126]: 2008/06/13_14:05:35 debug: cib_rsc_callback: Resource update 33 complete: rc=0
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:05:35 info: send_rsc_command: Initiating action 6: probe_complete probe_complete on node-b
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:05:35 debug: send_rsc_command: Skipping wait for 6
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         <lrm_resources>
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=2, Pending=2, Fired=1, Skipped=0, Incomplete=10)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           <lrm_resource id="dummy" type="Dummy" class="ocf" provider="heartbeat" __crm_diff_marker__="added:top">
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Restarting TE timer
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="dummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition-key="8:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:7;8:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="3" crm_feature_set="3.0" rc-code="7" op-status="0" interval="0" last-run="1213333534" last-rc-change="1213333534" exec-time="40" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
crmd[12126]: 2008/06/13_14:05:35 debug: stop_te_timer: Stopping global timer
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           </lrm_resource>
crmd[12126]: 2008/06/13_14:05:35 debug: start_global_timer: Starting abort timer: 300000ms
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         </lrm_resources>
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       </lrm>
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=3, Pending=2, Fired=0, Skipped=0, Incomplete=10)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:35 debug: send_peer_reply: Sending update diff 0.8.2 -> 0.8.3
heartbeat[12113]: 2008/06/13_14:05:35 debug: rexmit request from node node-a for msg(96-96)
heartbeat[12113]: 2008/06/13_14:05:35 info: Retransmitting pkt 96
heartbeat[12113]: 2008/06/13_14:05:35 info: msg size =2804, type=cib
heartbeat[12113]: 2008/06/13_14:05:35 debug: rexmit request from node node-a for msg(96-96)
cib[12122]: 2008/06/13_14:05:35 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:35 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: - <cib num_updates="3"/>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + <cib num_updates="4">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:0" type="external/ssh" class="stonith" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:0_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition-key="4:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:7;4:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="2" crm_feature_set="3.0" rc-code="7" op-status="0" interval="0" last-run="1213333534" last-rc-change="1213333534" exec-time="10" queue-time="0" op-digest="c96fa7fdbe97d2d472e37ec6c935a0d1"/>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:35 debug: send_peer_reply: Sending update diff 0.8.3 -> 0.8.4
cib[12122]: 2008/06/13_14:05:35 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:35 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: - <cib num_updates="4"/>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + <cib num_updates="5">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           <lrm_resource id="dummy" type="Dummy" class="ocf" provider="heartbeat" __crm_diff_marker__="added:top">
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="dummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition-key="5:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:7;5:1:7:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="3" crm_feature_set="3.0" rc-code="7" op-status="0" interval="0" last-run="1213333534" last-rc-change="1213333534" exec-time="30" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:35 debug: send_peer_reply: Sending update diff 0.8.4 -> 0.8.5
crmd[12126]: 2008/06/13_14:05:35 debug: te_update_diff: Processing diff (cib_modify): 0.8.3 -> 0.8.4 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:35 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
crmd[12126]: 2008/06/13_14:05:35 info: match_graph_event: Action stonith1-ssh:0_monitor_0 (4) confirmed on node-a (rc=0)
cib[12122]: 2008/06/13_14:05:35 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:35 debug: te_update_diff: Processing diff (cib_modify): 0.8.4 -> 0.8.5 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: - <cib num_updates="5"/>
crmd[12126]: 2008/06/13_14:05:35 info: match_graph_event: Action dummy_monitor_0 (5) confirmed on node-a (rc=0)
cib[12122]: 2008/06/13_14:05:35 debug: log_data_element: cib:diff: + <cib num_updates="6"/>
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:35 debug: send_peer_reply: Sending update diff 0.8.5 -> 0.8.6
crmd[12126]: 2008/06/13_14:05:35 info: send_rsc_command: Initiating action 3: probe_complete probe_complete on node-a
crmd[12126]: 2008/06/13_14:05:35 debug: send_rsc_command: Skipping wait for 3
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=5, Pending=0, Fired=1, Skipped=0, Incomplete=9)
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:35 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:35 debug: start_global_timer: Starting abort timer: 300000ms
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:35 info: te_pseudo_action: Pseudo action 2 fired and confirmed
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=6, Pending=0, Fired=1, Skipped=0, Incomplete=8)
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:35 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:35 debug: start_global_timer: Starting abort timer: 300000ms
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:35 info: te_pseudo_action: Pseudo action 13 fired and confirmed
crmd[12126]: 2008/06/13_14:05:35 info: send_rsc_command: Initiating action 17: start dummy_start_0 on node-a
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=7, Pending=1, Fired=2, Skipped=0, Incomplete=6)
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:35 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:35 debug: start_global_timer: Starting abort timer: 300000ms
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:35 info: send_rsc_command: Initiating action 9: start stonith1-ssh:0_start_0 on node-a
crmd[12126]: 2008/06/13_14:05:35 info: send_rsc_command: Initiating action 11: start stonith1-ssh:1_start_0 on node-b
crmd[12126]: 2008/06/13_14:05:35 debug: run_graph: Transition 1: (Complete=8, Pending=3, Fired=2, Skipped=0, Incomplete=4)
crmd[12126]: 2008/06/13_14:05:35 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:35 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:35 debug: start_global_timer: Starting abort timer: 300000ms
heartbeat[12113]: 2008/06/13_14:05:36 debug: rexmit request from node node-a for msg(96-96)
heartbeat[12113]: 2008/06/13_14:05:36 info: Retransmitting pkt 96
heartbeat[12113]: 2008/06/13_14:05:36 info: msg size =2804, type=cib
heartbeat[12113]: 2008/06/13_14:05:36 debug: rexmit request from node node-a for msg(96-96)
crmd[12126]: 2008/06/13_14:05:36 info: do_lrm_rsc_op: Performing op=stonith1-ssh:1_start_0 key=11:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06)
lrmd[12123]: 2008/06/13_14:05:36 debug: on_msg_perform_op:2290: copying parameters for rsc stonith1-ssh:1
lrmd[12123]: 2008/06/13_14:05:36 debug: on_msg_perform_op: add an operation operation start[4] on stonith::external/ssh::stonith1-ssh:1 for client 12126, its parameters: hostlist=[node-a node-b] CRM_meta_timeout=[30000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_name=[start] CRM_meta_clone=[1] CRM_meta_clone_node_max=[1]  to the operation list.
lrmd[12123]: 2008/06/13_14:05:36 info: rsc:stonith1-ssh:1: start
lrmd[12151]: 2008/06/13_14:05:36 debug: stonithd_signon: creating connection
lrmd[12151]: 2008/06/13_14:05:36 debug: sending out the signon msg.
crmd[12126]: 2008/06/13_14:05:36 debug: do_lrm_rsc_op: Recording pending op: 4 - stonith1-ssh:1_start_0 stonith1-ssh:1:4
stonithd[12124]: 2008/06/13_14:05:36 debug: client STONITH_RA_EXEC_12151 (pid=12151) succeeded to signon to stonithd.
lrmd[12151]: 2008/06/13_14:05:36 debug: signed on to stonithd.
lrmd[12151]: 2008/06/13_14:05:36 info: Try to start STONITH resource <rsc_id=stonith1-ssh:1> : Device=external/ssh
stonithd[12124]: 2008/06/13_14:05:36 debug: client STONITH_RA_EXEC_12151 [pid: 12151] requests a resource operation start on stonith1-ssh:1 (external/ssh)
stonithd[12124]: 2008/06/13_14:05:36 debug: external_set_config: called.
stonithd[12124]: 2008/06/13_14:05:36 debug: external_get_confignames: called.
stonithd[12124]: 2008/06/13_14:05:36 debug: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/ssh getconfignames'
lrmd[12151]: 2008/06/13_14:05:36 debug: waiting for the stonithRA reply msg.
stonithd[12124]: 2008/06/13_14:05:36 debug: external_run_cmd: '/usr/lib64/stonith/plugins/external/ssh getconfignames' output: hostlist

stonithd[12124]: 2008/06/13_14:05:36 debug: external_get_confignames: 'ssh getconfignames' returned 0
stonithd[12124]: 2008/06/13_14:05:36 debug: external_get_confignames: ssh configname hostlist
stonithd[12156]: 2008/06/13_14:05:36 debug: external_status: called.
stonithd[12156]: 2008/06/13_14:05:36 debug: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/ssh status'
lrmd[12151]: 2008/06/13_14:05:36 debug: a stonith RA operation queue to run, call_id=12156.
lrmd[12151]: 2008/06/13_14:05:36 debug: stonithd_receive_ops_result: begin
cib[12122]: 2008/06/13_14:05:37 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:37 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: - <cib num_updates="6"/>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + <cib num_updates="7"/>
cib[12122]: 2008/06/13_14:05:37 debug: send_peer_reply: Sending update diff 0.8.6 -> 0.8.7
cib[12122]: 2008/06/13_14:05:37 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:37 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:37 debug: te_update_diff: Processing diff (cib_modify): 0.8.7 -> 0.8.8 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: - <cib num_updates="7"/>
crmd[12126]: 2008/06/13_14:05:37 info: match_graph_event: Action dummy_start_0 (17) confirmed on node-a (rc=0)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + <cib num_updates="8">
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:37 info: send_rsc_command: Initiating action 18: monitor dummy_monitor_10000 on node-a
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
crmd[12126]: 2008/06/13_14:05:37 debug: run_graph: Transition 1: (Complete=9, Pending=3, Fired=1, Skipped=0, Incomplete=3)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Restarting TE timer
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +         <lrm_resources>
crmd[12126]: 2008/06/13_14:05:37 debug: stop_te_timer: Stopping global timer
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +           <lrm_resource id="dummy">
crmd[12126]: 2008/06/13_14:05:37 debug: start_global_timer: Starting abort timer: 300000ms
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="dummy_start_0" operation="start" crm-debug-origin="do_update_resource" transition-key="17:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;17:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="4" crm_feature_set="3.0" rc-code="0" op-status="0" interval="0" last-run="1213333535" last-rc-change="1213333535" exec-time="40" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:37 debug: send_peer_reply: Sending update diff 0.8.7 -> 0.8.8
cib[12122]: 2008/06/13_14:05:37 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:37 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:37 debug: te_update_diff: Processing diff (cib_modify): 0.8.8 -> 0.8.9 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: - <cib num_updates="8"/>
crmd[12126]: 2008/06/13_14:05:37 info: match_graph_event: Action stonith1-ssh:0_start_0 (9) confirmed on node-a (rc=0)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + <cib num_updates="9">
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:37 info: send_rsc_command: Initiating action 10: monitor stonith1-ssh:0_monitor_10000 on node-a
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
crmd[12126]: 2008/06/13_14:05:37 debug: run_graph: Transition 1: (Complete=10, Pending=3, Fired=1, Skipped=0, Incomplete=2)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Restarting TE timer
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +         <lrm_resources>
crmd[12126]: 2008/06/13_14:05:37 debug: stop_te_timer: Stopping global timer
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:0">
crmd[12126]: 2008/06/13_14:05:37 debug: start_global_timer: Starting abort timer: 300000ms
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:0_start_0" operation="start" crm-debug-origin="do_update_resource" transition-key="9:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;9:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="5" crm_feature_set="3.0" rc-code="0" op-status="0" interval="0" last-run="1213333535" last-rc-change="1213333535" exec-time="70" queue-time="0" op-digest="c96fa7fdbe97d2d472e37ec6c935a0d1" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:37 debug: send_peer_reply: Sending update diff 0.8.8 -> 0.8.9
stonithd[12156]: 2008/06/13_14:05:37 debug: external_run_cmd: '/usr/lib64/stonith/plugins/external/ssh status' output: 
stonithd[12156]: 2008/06/13_14:05:37 debug: external_status: running 'ssh status' returned 0
stonithd[12156]: 2008/06/13_14:05:37 debug: external_hostlist: called.
stonithd[12156]: 2008/06/13_14:05:37 debug: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/ssh gethosts'
stonithd[12156]: 2008/06/13_14:05:37 debug: external_run_cmd: '/usr/lib64/stonith/plugins/external/ssh gethosts' output: node-a
node-b

stonithd[12156]: 2008/06/13_14:05:37 debug: external_hostlist: running 'ssh gethosts' returned 0
stonithd[12156]: 2008/06/13_14:05:37 debug: external_hostlist: ssh host node-a
stonithd[12156]: 2008/06/13_14:05:37 debug: external_hostlist: ssh host node-b
stonithd[12156]: 2008/06/13_14:05:37 debug: stonith1-ssh:1 claims it can manage node node-a
stonithd[12156]: 2008/06/13_14:05:37 debug: remove us (node-b) from the host list for stonith1-ssh:1
stonithd[12124]: 2008/06/13_14:05:37 debug: Child process external_stonith1-ssh:1_start [12156] exited, its exit code: 0 when signo=0.
stonithd[12124]: 2008/06/13_14:05:37 debug: stonith1-ssh:1's (external/ssh) op start finished. op_result=0
stonithd[12124]: 2008/06/13_14:05:37 debug: client STONITH_RA_EXEC_12151 (pid=12151) signed off
lrmd[12123]: 2008/06/13_14:05:37 info: Managed stonith1-ssh:1:start process 12151 exited with return code 0.
crmd[12126]: 2008/06/13_14:05:37 info: process_lrm_event: LRM operation stonith1-ssh:1_start_0 (call=4, rc=0) complete 
crmd[12126]: 2008/06/13_14:05:37 debug: build_operation_update: Calculated digest c96fa7fdbe97d2d472e37ec6c935a0d1 for stonith1-ssh:1_start_0 (0:0;11:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06)

crmd[12126]: 2008/06/13_14:05:37 debug: log_data_element: build_operation_update: digest:source <parameters hostlist="node-a node-b"/>
crmd[12126]: 2008/06/13_14:05:37 debug: get_rsc_metadata: Retreiving metadata for external/ssh::stonith:heartbeat
lrmd[12123]: 2008/06/13_14:05:37 debug: stonithRA plugin: provider attribute is not needed and will be ignored.
crmd[12126]: 2008/06/13_14:05:37 debug: append_restart_list: Resource stonith1-ssh:1 does not support reloads
crmd[12126]: 2008/06/13_14:05:37 debug: do_update_resource: Sent resource state update message: 36
crmd[12126]: 2008/06/13_14:05:37 debug: process_lrm_event: Op stonith1-ssh:1_start_0 (call=4, stop_id=stonith1-ssh:1:4): Confirmed
cib[12122]: 2008/06/13_14:05:37 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:37 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: - <cib num_updates="9"/>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + <cib num_updates="10">
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:1">
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:1_start_0" operation="start" crm-debug-origin="do_update_resource" transition-key="11:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;11:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="4" crm_feature_set="3.0" rc-code="0" op-status="0" interval="0" last-run="1213333535" last-rc-change="1213333535" exec-time="1040" queue-time="0" op-digest="c96fa7fdbe97d2d472e37ec6c935a0d1" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:37 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:37 debug: send_peer_reply: Sending update diff 0.8.9 -> 0.8.10
crmd[12126]: 2008/06/13_14:05:37 debug: te_update_diff: Processing diff (cib_modify): 0.8.9 -> 0.8.10 (S_TRANSITION_ENGINE)
crmd[12126]: 2008/06/13_14:05:37 info: match_graph_event: Action stonith1-ssh:1_start_0 (11) confirmed on node-b (rc=0)
crmd[12126]: 2008/06/13_14:05:37 debug: cib_rsc_callback: Resource update 36 complete: rc=0
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:37 info: send_rsc_command: Initiating action 12: monitor stonith1-ssh:1_monitor_10000 on node-b
crmd[12126]: 2008/06/13_14:05:37 info: te_pseudo_action: Pseudo action 14 fired and confirmed
crmd[12126]: 2008/06/13_14:05:37 debug: run_graph: Transition 1: (Complete=11, Pending=3, Fired=2, Skipped=0, Incomplete=0)
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:05:37 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:05:37 debug: start_global_timer: Starting abort timer: 300000ms
crmd[12126]: 2008/06/13_14:05:37 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:05:37 debug: run_graph: Transition 1: (Complete=12, Pending=3, Fired=0, Skipped=0, Incomplete=0)
crmd[12126]: 2008/06/13_14:05:37 info: do_lrm_rsc_op: Performing op=stonith1-ssh:1_monitor_10000 key=12:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06)
lrmd[12123]: 2008/06/13_14:05:37 debug: on_msg_perform_op: add an operation operation monitor[5] on stonith::external/ssh::stonith1-ssh:1 for client 12126, its parameters: CRM_meta_interval=[10000] hostlist=[node-a node-b] CRM_meta_timeout=[30000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_clone=[1] CRM_meta_clone_node_max=[1]  to the operation list.
crmd[12126]: 2008/06/13_14:05:37 debug: do_lrm_rsc_op: Recording pending op: 5 - stonith1-ssh:1_monitor_10000 stonith1-ssh:1:5
lrmd[12182]: 2008/06/13_14:05:37 debug: stonithd_signon: creating connection
lrmd[12182]: 2008/06/13_14:05:37 debug: sending out the signon msg.
stonithd[12124]: 2008/06/13_14:05:37 debug: client STONITH_RA_EXEC_12182 (pid=12182) succeeded to signon to stonithd.
lrmd[12182]: 2008/06/13_14:05:37 debug: signed on to stonithd.
lrmd[12182]: 2008/06/13_14:05:37 debug: waiting for the stonithRA reply msg.
stonithd[12124]: 2008/06/13_14:05:37 debug: client STONITH_RA_EXEC_12182 [pid: 12182] requests a resource operation monitor on stonith1-ssh:1 (external/ssh)
stonithd[12183]: 2008/06/13_14:05:37 debug: external_status: called.
stonithd[12183]: 2008/06/13_14:05:37 debug: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/ssh status'
lrmd[12182]: 2008/06/13_14:05:37 debug: a stonith RA operation queue to run, call_id=12183.
lrmd[12182]: 2008/06/13_14:05:37 debug: stonithd_receive_ops_result: begin
cib[12122]: 2008/06/13_14:05:38 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:38 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:38 debug: te_update_diff: Processing diff (cib_modify): 0.8.10 -> 0.8.11 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: - <cib num_updates="10"/>
crmd[12126]: 2008/06/13_14:05:38 info: match_graph_event: Action dummy_monitor_10000 (18) confirmed on node-a (rc=0)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: + <cib num_updates="11">
crmd[12126]: 2008/06/13_14:05:38 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:38 debug: run_graph: Transition 1: (Complete=13, Pending=2, Fired=0, Skipped=0, Incomplete=0)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +           <lrm_resource id="dummy">
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="dummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" transition-key="18:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;18:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="6" crm_feature_set="3.0" rc-code="0" op-status="0" interval="10000" last-run="1213333536" last-rc-change="1213333536" exec-time="40" queue-time="0" op-digest="02a5bcf940fc8d3239701acb11438d6a" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:38 debug: send_peer_reply: Sending update diff 0.8.10 -> 0.8.11
cib[12122]: 2008/06/13_14:05:38 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:38 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:38 debug: te_update_diff: Processing diff (cib_modify): 0.8.11 -> 0.8.12 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: - <cib num_updates="11"/>
crmd[12126]: 2008/06/13_14:05:38 info: match_graph_event: Action stonith1-ssh:0_monitor_10000 (10) confirmed on node-a (rc=0)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: + <cib num_updates="12">
crmd[12126]: 2008/06/13_14:05:38 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:38 debug: run_graph: Transition 1: (Complete=14, Pending=1, Fired=0, Skipped=0, Incomplete=0)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +     <node_state id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +       <lrm id="8029f8c4-1f03-4695-a78a-29c02fdd399c">
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:0">
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:0_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" transition-key="10:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;10:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="7" crm_feature_set="3.0" rc-code="0" op-status="0" interval="10000" last-run="1213333536" last-rc-change="1213333536" exec-time="60" queue-time="0" op-digest="77cd1a2133a839e048bab34ebe42de05" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:38 debug: send_peer_reply: Sending update diff 0.8.11 -> 0.8.12
stonithd[12183]: 2008/06/13_14:05:38 debug: external_run_cmd: '/usr/lib64/stonith/plugins/external/ssh status' output: 
stonithd[12183]: 2008/06/13_14:05:38 debug: external_status: running 'ssh status' returned 0
stonithd[12124]: 2008/06/13_14:05:38 debug: Child process external_stonith1-ssh:1_monitor [12183] exited, its exit code: 0 when signo=0.
stonithd[12124]: 2008/06/13_14:05:38 debug: stonith1-ssh:1's (external/ssh) op monitor finished. op_result=0
stonithd[12124]: 2008/06/13_14:05:38 debug: client STONITH_RA_EXEC_12182 (pid=12182) signed off
crmd[12126]: 2008/06/13_14:05:38 info: process_lrm_event: LRM operation stonith1-ssh:1_monitor_10000 (call=5, rc=0) complete 
crmd[12126]: 2008/06/13_14:05:38 debug: do_update_resource: Sent resource state update message: 37
cib[12122]: 2008/06/13_14:05:38 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:38 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:38 debug: te_update_diff: Processing diff (cib_modify): 0.8.12 -> 0.8.13 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: - <cib num_updates="12"/>
crmd[12126]: 2008/06/13_14:05:38 info: match_graph_event: Action stonith1-ssh:1_monitor_10000 (12) confirmed on node-b (rc=0)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: + <cib num_updates="13">
crmd[12126]: 2008/06/13_14:05:38 debug: cib_rsc_callback: Resource update 37 complete: rc=0
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:05:38 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:05:38 debug: run_graph: ====================================================
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:05:38 info: run_graph: Transition 1: (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=0)
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +         <lrm_resources>
crmd[12126]: 2008/06/13_14:05:38 debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:1">
crmd[12126]: 2008/06/13_14:05:38 debug: stop_te_timer: Stopping global timer
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:1_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" transition-key="12:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;12:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="5" crm_feature_set="3.0" rc-code="0" op-status="0" interval="10000" last-run="1213333536" last-rc-change="1213333536" exec-time="1020" queue-time="0" op-digest="77cd1a2133a839e048bab34ebe42de05" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:05:38 info: notify_crmd: Transition 1 status: done - <null>
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +           </lrm_resource>
crmd[12126]: 2008/06/13_14:05:38 debug: register_fsa_input_adv: notify_crmd appended FSA input 29 (I_TE_SUCCESS) (cause=C_FSA_INTERNAL) without data
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +         </lrm_resources>
crmd[12126]: 2008/06/13_14:05:38 debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +       </lrm>
crmd[12126]: 2008/06/13_14:05:38 debug: do_fsa_action: actions:trace: 	// A_LOG   
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +     </node_state>
crmd[12126]: 2008/06/13_14:05:38 info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: +   </status>
crmd[12126]: 2008/06/13_14:05:38 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
cib[12122]: 2008/06/13_14:05:38 debug: log_data_element: cib:diff: + </cib>
crmd[12126]: 2008/06/13_14:05:38 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
cib[12122]: 2008/06/13_14:05:38 debug: send_peer_reply: Sending update diff 0.8.12 -> 0.8.13
crmd[12126]: 2008/06/13_14:05:38 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
lrmd[12193]: 2008/06/13_14:05:48 debug: stonithd_signon: creating connection
lrmd[12193]: 2008/06/13_14:05:48 debug: sending out the signon msg.
stonithd[12124]: 2008/06/13_14:05:48 debug: client STONITH_RA_EXEC_12193 (pid=12193) succeeded to signon to stonithd.
lrmd[12193]: 2008/06/13_14:05:48 debug: signed on to stonithd.
lrmd[12193]: 2008/06/13_14:05:48 debug: waiting for the stonithRA reply msg.
stonithd[12124]: 2008/06/13_14:05:48 debug: client STONITH_RA_EXEC_12193 [pid: 12193] requests a resource operation monitor on stonith1-ssh:1 (external/ssh)
stonithd[12194]: 2008/06/13_14:05:48 debug: external_status: called.
stonithd[12194]: 2008/06/13_14:05:48 debug: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/ssh status'
lrmd[12193]: 2008/06/13_14:05:48 debug: a stonith RA operation queue to run, call_id=12194.
lrmd[12193]: 2008/06/13_14:05:48 debug: stonithd_receive_ops_result: begin
stonithd[12194]: 2008/06/13_14:05:49 debug: external_run_cmd: '/usr/lib64/stonith/plugins/external/ssh status' output: 
stonithd[12194]: 2008/06/13_14:05:49 debug: external_status: running 'ssh status' returned 0
stonithd[12124]: 2008/06/13_14:05:49 debug: Child process external_stonith1-ssh:1_monitor [12194] exited, its exit code: 0 when signo=0.
stonithd[12124]: 2008/06/13_14:05:49 debug: stonith1-ssh:1's (external/ssh) op monitor finished. op_result=0
stonithd[12124]: 2008/06/13_14:05:49 debug: client STONITH_RA_EXEC_12193 (pid=12193) signed off
heartbeat[12113]: 2008/06/13_14:05:57 WARN: Managed /usr/lib64/heartbeat/stonithd process 12124 killed by signal 9 [SIGKILL - Kill, unblockable].
crmd[12126]: 2008/06/13_14:05:57 ERROR: stonithd_op_result_ready: not signed on
heartbeat[12113]: 2008/06/13_14:05:57 debug: G_remove_client(pid=12124, reason='died' gsource=0x9a5278) {
crmd[12126]: 2008/06/13_14:05:57 ERROR: tengine_stonith_connection_destroy: Fencing daemon has left us
heartbeat[12113]: 2008/06/13_14:05:57 debug: api_remove_client_int: removing pid [12124] reason: died
crmd[12126]: 2008/06/13_14:05:57 info: te_connect_stonith: Attempting connection to fencing daemon...
heartbeat[12113]: 2008/06/13_14:05:57 debug: api_send_client: client 12124 died
heartbeat[12113]: 2008/06/13_14:05:57 debug: }/*G_remove_client;*/
heartbeat[12113]: 2008/06/13_14:05:57 ERROR: Client /usr/lib64/heartbeat/stonithd (pid=12124) killed by signal 9.
heartbeat[12113]: 2008/06/13_14:05:57 ERROR: Respawning client "/usr/lib64/heartbeat/stonithd":
heartbeat[12113]: 2008/06/13_14:05:57 info: Starting child client "/usr/lib64/heartbeat/stonithd" (0,0)
heartbeat[12206]: 2008/06/13_14:05:57 info: Starting "/usr/lib64/heartbeat/stonithd" as uid 0  gid 0 (pid 12206)
stonithd[12206]: 2008/06/13_14:05:57 info: G_main_add_SignalHandler: Added signal handler for signal 10
stonithd[12206]: 2008/06/13_14:05:57 info: G_main_add_SignalHandler: Added signal handler for signal 12
stonithd[12206]: 2008/06/13_14:05:57 debug: pid 12206 locked in memory.
heartbeat[12113]: 2008/06/13_14:05:57 debug: APIregistration_dispatch() {
heartbeat[12113]: 2008/06/13_14:05:57 debug: process_registerevent() {
heartbeat[12113]: 2008/06/13_14:05:57 debug: client->gsource = 0x9a86d8
heartbeat[12113]: 2008/06/13_14:05:57 debug: }/*process_registerevent*/;
heartbeat[12113]: 2008/06/13_14:05:57 debug: }/*APIregistration_dispatch*/;
heartbeat[12113]: 2008/06/13_14:05:57 debug: Checking client authorization for client stonithd (0:0)
heartbeat[12113]: 2008/06/13_14:05:57 debug: create_seq_snapshot_table:no missing packets found for node node-a
heartbeat[12113]: 2008/06/13_14:05:57 debug: create_seq_snapshot_table:no missing packets found for node node-b
heartbeat[12113]: 2008/06/13_14:05:57 debug: Signing on API client 12206 (stonithd)
stonithd[12206]: 2008/06/13_14:05:57 info: register_heartbeat_conn: Hostname: node-b
stonithd[12206]: 2008/06/13_14:05:57 info: register_heartbeat_conn: UUID: db8f2da4-a7fb-40bf-bf14-befe4af11db7
stonithd[12206]: 2008/06/13_14:05:57 debug: Setting message filter mode
stonithd[12206]: 2008/06/13_14:05:58 debug: apichan=0x1fea2118
stonithd[12206]: 2008/06/13_14:05:58 debug: callback_chan=0x1fea2398
stonithd[12206]: 2008/06/13_14:05:58 notice: /usr/lib64/heartbeat/stonithd start up successfully.
stonithd[12206]: 2008/06/13_14:05:58 info: G_main_add_SignalHandler: Added signal handler for signal 17
crmd[12126]: 2008/06/13_14:05:58 debug: stonithd_signon: creating connection
crmd[12126]: 2008/06/13_14:05:58 debug: sending out the signon msg.
crmd[12126]: 2008/06/13_14:05:58 debug: signed on to stonithd.
stonithd[12206]: 2008/06/13_14:05:58 debug: client tengine (pid=12126) succeeded to signon to stonithd.
crmd[12126]: 2008/06/13_14:05:58 info: te_connect_stonith: Connected
lrmd[12207]: 2008/06/13_14:05:59 debug: stonithd_signon: creating connection
lrmd[12207]: 2008/06/13_14:05:59 debug: sending out the signon msg.
stonithd[12206]: 2008/06/13_14:05:59 debug: client STONITH_RA_EXEC_12207 (pid=12207) succeeded to signon to stonithd.
lrmd[12207]: 2008/06/13_14:05:59 debug: signed on to stonithd.
lrmd[12207]: 2008/06/13_14:05:59 debug: waiting for the stonithRA reply msg.
stonithd[12206]: 2008/06/13_14:05:59 debug: client STONITH_RA_EXEC_12207 [pid: 12207] requests a resource operation monitor on stonith1-ssh:1 (external/ssh)
stonithd[12206]: 2008/06/13_14:05:59 debug: stonithRA_monitor: stonith1-ssh:1 is not started.
stonithd[12206]: 2008/06/13_14:05:59 debug: Child process unknown_stonith1-ssh:1_monitor [12208] exited, its exit code: 7 when signo=0.
stonithd[12206]: 2008/06/13_14:05:59 debug: stonith1-ssh:1's (external/ssh) op monitor finished. op_result=7
lrmd[12207]: 2008/06/13_14:05:59 debug: a stonith RA operation queue to run, call_id=12208.
lrmd[12207]: 2008/06/13_14:05:59 debug: stonithd_receive_ops_result: begin
stonithd[12206]: 2008/06/13_14:05:59 debug: client STONITH_RA_EXEC_12207 (pid=12207) signed off
crmd[12126]: 2008/06/13_14:05:59 info: process_lrm_event: LRM operation stonith1-ssh:1_monitor_10000 (call=5, rc=7) complete 
crmd[12126]: 2008/06/13_14:05:59 debug: do_update_resource: Sent resource state update message: 38
cib[12122]: 2008/06/13_14:05:59 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:59 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:59 debug: te_update_diff: Processing diff (cib_modify): 0.8.13 -> 0.8.14 (S_IDLE)
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: - <cib num_updates="13">
crmd[12126]: 2008/06/13_14:05:59 info: process_graph_event: Action stonith1-ssh:1_monitor_10000 arrived after a completed transition
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -   <status>
crmd[12126]: 2008/06/13_14:05:59 debug: abort_transition_graph: process_graph_event:556 - Triggered graph processing (complete=1) : Inactive graph
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:05:59 debug: register_fsa_input_adv: abort_transition_graph appended FSA input 30 (I_PE_CALC) (cause=C_FSA_INTERNAL) without data
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:05:59 WARN: update_failcount: Updating failcount for stonith1-ssh:1 on db8f2da4-a7fb-40bf-bf14-befe4af11db7 after failed monitor: rc=7 (update=value++, time=1213333559)
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -         <lrm_resources>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -           <lrm_resource id="stonith1-ssh:1">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -             <lrm_rsc_op transition-magic="0:0;12:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" rc-code="0" last-run="1213333536" last-rc-change="1213333536" exec-time="1020" id="stonith1-ssh:1_monitor_10000"/>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -           </lrm_resource>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -         </lrm_resources>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -       </lrm>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -     </node_state>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: -   </status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: - </cib>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: + <cib num_updates="14">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +         <lrm_resources>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:1">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +             <lrm_rsc_op transition-magic="0:7;12:1:0:81471eca-6a9e-410b-b2b2-db41164a8f06" rc-code="7" last-run="1213333558" last-rc-change="1213333558" exec-time="10" id="stonith1-ssh:1_monitor_10000"/>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +           </lrm_resource>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +         </lrm_resources>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +       </lrm>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:59 debug: send_peer_reply: Sending update diff 0.8.13 -> 0.8.14
cib[12122]: 2008/06/13_14:05:59 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:05:59 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: - <cib num_updates="14"/>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: + <cib num_updates="15">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +       <transient_attributes id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +         <instance_attributes id="status-db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +           <attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +             <nvpair id="status-db8f2da4-a7fb-40bf-bf14-befe4af11db7-fail-count-stonith1-ssh:1" name="fail-count-stonith1-ssh:1" value="1" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +           </attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +         </instance_attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +       </transient_attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:59 debug: send_peer_reply: Sending update diff 0.8.14 -> 0.8.15
crmd[12126]: 2008/06/13_14:05:59 debug: cib_rsc_callback: Resource update 38 complete: rc=0
crmd[12126]: 2008/06/13_14:05:59 debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
crmd[12126]: 2008/06/13_14:05:59 info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
crmd[12126]: 2008/06/13_14:05:59 info: do_state_transition: All 2 cluster nodes are eligible to run resources.
cib[12122]: 2008/06/13_14:05:59 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
crmd[12126]: 2008/06/13_14:05:59 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
cib[12122]: 2008/06/13_14:05:59 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:05:59 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:59 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:05:59 debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
crmd[12126]: 2008/06/13_14:05:59 debug: do_pe_invoke: Requesting the current CIB: S_POLICY_ENGINE
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: - <cib num_updates="15"/>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: + <cib num_updates="16">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +   <status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +       <transient_attributes id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +         <instance_attributes id="status-db8f2da4-a7fb-40bf-bf14-befe4af11db7">
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +           <attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +             <nvpair id="status-db8f2da4-a7fb-40bf-bf14-befe4af11db7-last-failure-stonith1-ssh:1" name="last-failure-stonith1-ssh:1" value="1213333559" __crm_diff_marker__="added:top"/>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +           </attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +         </instance_attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +       </transient_attributes>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +     </node_state>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: +   </status>
cib[12122]: 2008/06/13_14:05:59 debug: log_data_element: cib:diff: + </cib>
cib[12122]: 2008/06/13_14:05:59 debug: send_peer_reply: Sending update diff 0.8.15 -> 0.8.16
crmd[12126]: 2008/06/13_14:05:59 debug: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1213333559-29, seq=2, quorate=1
pengine[12132]: 2008/06/13_14:06:00 WARN: process_pe_message: Your current configuration only conforms to transitional-0.6
pengine[12132]: 2008/06/13_14:06:00 WARN: process_pe_message: Please use XXX to upgrade pacemaker-0.7
pengine[12132]: 2008/06/13_14:06:00 debug: update_validation: Testing 'transitional-0.6' validation
pengine[12132]: 2008/06/13_14:06:00 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
pengine[12132]: 2008/06/13_14:06:00 notice: update_validation: Upgrading transitional-0.6-style configuration to pacemaker-0.7 with /usr/share/heartbeat/upgrade.xsl
pengine[12132]: 2008/06/13_14:06:00 info: validate_with: Validating with: /usr/share/heartbeat/pacemaker-0.7.rng (type=2)
pengine[12132]: 2008/06/13_14:06:00 info: update_validation: Transformation /usr/share/heartbeat/upgrade.xsl successful
pengine[12132]: 2008/06/13_14:06:00 notice: update_validation: Upgraded from transitional-0.6 to pacemaker-0.7 validation
pengine[12132]: 2008/06/13_14:06:00 WARN: process_pe_message: Your configuration was internally updated to pacemaker-0.7
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: Default action timeout: 120s
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: Default stickiness: 1000000
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: Stop all active resources: false
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: Default failure timeout: 0
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: Default migration threshold: 1
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: STONITH of failed nodes is enabled
pengine[12132]: 2008/06/13_14:06:00 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
pengine[12132]: 2008/06/13_14:06:00 notice: unpack_config: On loss of CCM Quorum: Ignore
pengine[12132]: 2008/06/13_14:06:00 WARN: unpack_nodes: Blind faith: not fencing unseen nodes
pengine[12132]: 2008/06/13_14:06:00 info: determine_online_status: Node node-b is online
pengine[12132]: 2008/06/13_14:06:00 info: unpack_rsc_op: Remapping stonith1-ssh-1_monitor_10000 (rc=7) on node-b to an ERROR (expected 0)
pengine[12132]: 2008/06/13_14:06:00 WARN: unpack_rsc_op: Processing failed op stonith1-ssh-1_monitor_10000 on node-b: Error
pengine[12132]: 2008/06/13_14:06:00 info: determine_online_status: Node node-a is online
pengine[12132]: 2008/06/13_14:06:00 info: get_failcount: stonith1-ssh:1 has failed 1 times on node-b
pengine[12132]: 2008/06/13_14:06:00 WARN: common_apply_stickiness: Forcing stonith1 away from node-b after 1 failures (max=1)
pengine[12132]: 2008/06/13_14:06:00 notice: clone_print: Clone Set: stonith1
pengine[12132]: 2008/06/13_14:06:00 notice: native_print:     stonith1-ssh:0	(stonith:external/ssh):	Started node-a
pengine[12132]: 2008/06/13_14:06:00 notice: native_print:     stonith1-ssh:1	(stonith:external/ssh):	Started node-b FAILED
pengine[12132]: 2008/06/13_14:06:00 notice: native_print: dummy	(ocf::heartbeat:Dummy):	Started node-a
pengine[12132]: 2008/06/13_14:06:00 debug: native_assign_node: Assigning node-a to stonith1-ssh:0
pengine[12132]: 2008/06/13_14:06:00 debug: native_assign_node: All nodes for resource stonith1-ssh:1 are unavailable, unclean or shutting down
pengine[12132]: 2008/06/13_14:06:00 WARN: native_color: Resource stonith1-ssh:1 cannot run anywhere
pengine[12132]: 2008/06/13_14:06:00 debug: clone_color: Allocated 1 stonith1 instances of a possible 2
pengine[12132]: 2008/06/13_14:06:00 debug: native_assign_node: Assigning node-a to dummy
pengine[12132]: 2008/06/13_14:06:00 notice: NoRoleChange: Leave resource stonith1-ssh:0	(Started node-a)
pengine[12132]: 2008/06/13_14:06:00 notice: NoRoleChange: Stop resource stonith1-ssh:1	(Started node-b)
pengine[12132]: 2008/06/13_14:06:00 notice: StopRsc:   node-b	Stop stonith1-ssh:1
pengine[12132]: 2008/06/13_14:06:00 notice: NoRoleChange: Leave resource dummy	(Started node-a)
pengine[12132]: 2008/06/13_14:06:00 debug: get_last_sequence: Series file /var/lib/heartbeat/pengine/pe-warn.last does not exist
crmd[12126]: 2008/06/13_14:06:00 debug: register_fsa_input_adv: route_message appended FSA input 31 (I_PE_SUCCESS) (cause=C_IPC_MESSAGE) with data
crmd[12126]: 2008/06/13_14:06:00 debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:06:00 info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
crmd[12126]: 2008/06/13_14:06:00 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:06:00 debug: stop_te_timer: global timer was already stopped
crmd[12126]: 2008/06/13_14:06:00 info: unpack_graph: Unpacked transition 2: 4 actions in 4 synapses
crmd[12126]: 2008/06/13_14:06:00 info: do_te_invoke: Processing graph 2 derived from /var/lib/heartbeat/pengine/pe-warn-0.bz2
crmd[12126]: 2008/06/13_14:06:00 debug: start_global_timer: Starting abort timer: 60000ms
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:06:00 debug: initiate_action: Action 13: Increasing IDLE timer to 240000
crmd[12126]: 2008/06/13_14:06:00 info: te_pseudo_action: Pseudo action 13 fired and confirmed
crmd[12126]: 2008/06/13_14:06:00 info: te_pseudo_action: Pseudo action 5 fired and confirmed
crmd[12126]: 2008/06/13_14:06:00 debug: run_graph: Transition 2: (Complete=0, Pending=0, Fired=2, Skipped=0, Incomplete=2)
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:06:00 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:06:00 debug: start_global_timer: Starting abort timer: 240000ms
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
crmd[12126]: 2008/06/13_14:06:00 info: send_rsc_command: Initiating action 2: stop stonith1-ssh:1_stop_0 on node-b
crmd[12126]: 2008/06/13_14:06:00 debug: send_rsc_command: Action 2: Increasing transition 2 timeout to 300000 (2*120000 + 60000)
crmd[12126]: 2008/06/13_14:06:00 debug: run_graph: Transition 2: (Complete=2, Pending=1, Fired=1, Skipped=0, Incomplete=1)
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Restarting TE timer
crmd[12126]: 2008/06/13_14:06:00 debug: stop_te_timer: Stopping global timer
crmd[12126]: 2008/06/13_14:06:00 debug: start_global_timer: Starting abort timer: 300000ms
pengine[12132]: 2008/06/13_14:06:00 WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/heartbeat/pengine/pe-warn-0.bz2
pengine[12132]: 2008/06/13_14:06:00 info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
crmd[12126]: 2008/06/13_14:06:00 debug: cancel_op: Cancelling op 5 for stonith1-ssh:1 (stonith1-ssh:1:5)
crmd[12126]: 2008/06/13_14:06:00 info: do_lrm_rsc_op: Performing op=stonith1-ssh:1_stop_0 key=2:2:0:81471eca-6a9e-410b-b2b2-db41164a8f06)
lrmd[12123]: 2008/06/13_14:06:00 debug: cancel_op: operation monitor[5] on stonith::external/ssh::stonith1-ssh:1 for client 12126, its parameters: CRM_meta_interval=[10000] hostlist=[node-a node-b] CRM_meta_timeout=[30000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_clone=[1] CRM_meta_clone_node_max=[1]  cancelled
lrmd[12123]: 2008/06/13_14:06:00 debug: on_msg_perform_op: add an operation operation stop[6] on stonith::external/ssh::stonith1-ssh:1 for client 12126, its parameters: CRM_meta_timeout=[120000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_clone=[1] CRM_meta_clone_node_max=[1]  to the operation list.
lrmd[12123]: 2008/06/13_14:06:00 info: rsc:stonith1-ssh:1: stop
lrmd[12209]: 2008/06/13_14:06:00 debug: stonithd_signon: creating connection
lrmd[12209]: 2008/06/13_14:06:00 debug: sending out the signon msg.
crmd[12126]: 2008/06/13_14:06:00 debug: do_lrm_rsc_op: Recording pending op: 6 - stonith1-ssh:1_stop_0 stonith1-ssh:1:6
crmd[12126]: 2008/06/13_14:06:00 info: process_lrm_event: LRM operation stonith1-ssh:1_monitor_10000 (call=5, rc=-2) Cancelled 
crmd[12126]: 2008/06/13_14:06:00 debug: process_lrm_event: Op stonith1-ssh:1_monitor_10000 (call=5): no delete event required
crmd[12126]: 2008/06/13_14:06:00 debug: process_lrm_event: Op stonith1-ssh:1_monitor_10000 (call=5, stop_id=stonith1-ssh:1:5): Confirmed
stonithd[12206]: 2008/06/13_14:06:00 debug: client STONITH_RA_EXEC_12209 (pid=12209) succeeded to signon to stonithd.
lrmd[12209]: 2008/06/13_14:06:00 debug: signed on to stonithd.
lrmd[12209]: 2008/06/13_14:06:00 info: Try to stop STONITH resource <rsc_id=stonith1-ssh:1> : Device=external/ssh
lrmd[12209]: 2008/06/13_14:06:00 debug: waiting for the stonithRA reply msg.
stonithd[12206]: 2008/06/13_14:06:00 debug: client STONITH_RA_EXEC_12209 [pid: 12209] requests a resource operation stop on stonith1-ssh:1 (external/ssh)
stonithd[12206]: 2008/06/13_14:06:00 notice: try to stop a resource stonith1-ssh:1 who is not in started resource queue.
stonithd[12206]: 2008/06/13_14:06:00 debug: Child process external/ssh_stonith1-ssh:1_stop [12210] exited, its exit code: 0 when signo=0.
lrmd[12209]: 2008/06/13_14:06:00 debug: a stonith RA operation queue to run, call_id=12210.
stonithd[12206]: 2008/06/13_14:06:00 debug: stonith1-ssh:1's (external/ssh) op stop finished. op_result=0
lrmd[12209]: 2008/06/13_14:06:00 debug: stonithd_receive_ops_result: begin
lrmd[12123]: 2008/06/13_14:06:00 info: Managed stonith1-ssh:1:stop process 12209 exited with return code 0.
stonithd[12206]: 2008/06/13_14:06:00 debug: client STONITH_RA_EXEC_12209 (pid=12209) signed off
crmd[12126]: 2008/06/13_14:06:00 info: process_lrm_event: LRM operation stonith1-ssh:1_stop_0 (call=6, rc=0) complete 
crmd[12126]: 2008/06/13_14:06:00 debug: do_update_resource: Sent resource state update message: 44
crmd[12126]: 2008/06/13_14:06:00 debug: process_lrm_event: Op stonith1-ssh:1_stop_0 (call=6, stop_id=stonith1-ssh:1:6): Confirmed
cib[12122]: 2008/06/13_14:06:00 info: validate_xml: Validating configuration with transitional-0.6: /usr/share/heartbeat/crm-transitional.dtd
cib[12122]: 2008/06/13_14:06:00 info: validate_with: Validating with: /usr/share/heartbeat/crm-transitional.dtd (type=1)
crmd[12126]: 2008/06/13_14:06:00 debug: te_update_diff: Processing diff (cib_modify): 0.8.16 -> 0.8.17 (S_TRANSITION_ENGINE)
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: - <cib num_updates="16"/>
crmd[12126]: 2008/06/13_14:06:00 info: match_graph_event: Action stonith1-ssh:1_stop_0 (2) confirmed on node-b (rc=0)
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: + <cib num_updates="17">
crmd[12126]: 2008/06/13_14:06:00 debug: cib_rsc_callback: Resource update 44 complete: rc=0
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +   <status>
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +     <node_state id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:06:00 info: te_pseudo_action: Pseudo action 14 fired and confirmed
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +       <lrm id="db8f2da4-a7fb-40bf-bf14-befe4af11db7">
crmd[12126]: 2008/06/13_14:06:00 debug: run_graph: Transition 2: (Complete=3, Pending=0, Fired=1, Skipped=0, Incomplete=0)
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +         <lrm_resources>
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Restarting TE timer
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +           <lrm_resource id="stonith1-ssh:1">
crmd[12126]: 2008/06/13_14:06:00 debug: stop_te_timer: Stopping global timer
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +             <lrm_rsc_op id="stonith1-ssh:1_stop_0" operation="stop" crm-debug-origin="do_update_resource" transition-key="2:2:0:81471eca-6a9e-410b-b2b2-db41164a8f06" transition-magic="0:0;2:2:0:81471eca-6a9e-410b-b2b2-db41164a8f06" call-id="6" crm_feature_set="3.0" rc-code="0" op-status="0" interval="0" last-run="1213333559" last-rc-change="1213333559" exec-time="10" queue-time="0" op-digest="c96fa7fdbe97d2d472e37ec6c935a0d1" __crm_diff_marker__="added:top"/>
crmd[12126]: 2008/06/13_14:06:00 debug: start_global_timer: Starting abort timer: 300000ms
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +           </lrm_resource>
crmd[12126]: 2008/06/13_14:06:00 debug: te_graph_trigger: Invoking the TE graph in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +         </lrm_resources>
crmd[12126]: 2008/06/13_14:06:00 debug: run_graph: ====================================================
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +       </lrm>
crmd[12126]: 2008/06/13_14:06:00 info: run_graph: Transition 2: (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0)
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +     </node_state>
crmd[12126]: 2008/06/13_14:06:00 debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: +   </status>
crmd[12126]: 2008/06/13_14:06:00 debug: stop_te_timer: Stopping global timer
cib[12122]: 2008/06/13_14:06:00 debug: log_data_element: cib:diff: + </cib>
crmd[12126]: 2008/06/13_14:06:00 info: notify_crmd: Transition 2 status: done - <null>
cib[12122]: 2008/06/13_14:06:00 debug: send_peer_reply: Sending update diff 0.8.16 -> 0.8.17
crmd[12126]: 2008/06/13_14:06:00 debug: register_fsa_input_adv: notify_crmd appended FSA input 32 (I_TE_SUCCESS) (cause=C_FSA_INTERNAL) without data
crmd[12126]: 2008/06/13_14:06:00 debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[12126]: 2008/06/13_14:06:00 info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[12126]: 2008/06/13_14:06:00 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
