logd[29966]: 2008/09/12_16:12:15 info: logd started with /etc/logd.cf.
logd[29967]: 2008/09/12_16:12:15 info: G_main_add_SignalHandler: Added signal handler for signal 15
logd[29966]: 2008/09/12_16:12:15 info: G_main_add_SignalHandler: Added signal handler for signal 15
heartbeat[29987]: 2008/09/12_16:12:15 info: Enabling logging daemon 
heartbeat[29987]: 2008/09/12_16:12:15 info: logfile and debug file are those specified in logd config file (default /etc/logd.cf)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(udpport,694)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(keepalive,2)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(warntime,20)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(deadtime,24)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(initdead,48)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(bcast,eth1)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(bcast,eth2)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(node,node01)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(node,node02)
heartbeat[29987]: 2008/09/12_16:12:15 debug: uid=hacluster, gid=<null>
heartbeat[29987]: 2008/09/12_16:12:15 debug: uid=hacluster, gid=<null>
heartbeat[29987]: 2008/09/12_16:12:15 debug: uid=<null>, gid=haclient
heartbeat[29987]: 2008/09/12_16:12:15 debug: uid=root, gid=<null>
heartbeat[29987]: 2008/09/12_16:12:15 debug: uid=<null>, gid=haclient
heartbeat[29987]: 2008/09/12_16:12:15 debug: Beginning authentication parsing
heartbeat[29987]: 2008/09/12_16:12:15 debug: 16 max authentication methods
heartbeat[29987]: 2008/09/12_16:12:15 debug: Keyfile opened
heartbeat[29987]: 2008/09/12_16:12:15 debug: Keyfile perms OK
heartbeat[29987]: 2008/09/12_16:12:15 debug: 16 max authentication methods
heartbeat[29987]: 2008/09/12_16:12:15 debug: Found authentication method [sha1]
heartbeat[29987]: 2008/09/12_16:12:15 info: AUTH: i=1: key = 0x9019f20, auth=0x752228, authname=sha1
heartbeat[29987]: 2008/09/12_16:12:15 debug: Outbound signing method is 1
heartbeat[29987]: 2008/09/12_16:12:15 debug: Authentication parsing complete [1]
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(cluster,linux-ha)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(hopfudge,1)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(baud,19200)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(auto_failback,legacy)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(hbgenmethod,file)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(realtime,true)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(msgfmt,classic)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(conn_logd_time,60)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(log_badpack,true)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(syslogmsgfmt,false)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(coredumps,true)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(autojoin,none)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(uuidfrom,file)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(compression,zlib)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(compression_threshold,2)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(traditional_compression,no)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(max_rexmit_delay,250)
heartbeat[29987]: 2008/09/12_16:12:15 debug: Setting max_rexmit_delay to 250 ms
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(record_config_changes,on)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(record_pengine_inputs,on)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(enable_config_writes,on)
heartbeat[29987]: 2008/09/12_16:12:15 debug: add_option(memreserve,6500)
heartbeat[29987]: 2008/09/12_16:12:15 info: **************************
heartbeat[29987]: 2008/09/12_16:12:15 info: Configuration validated. Starting heartbeat 2.99.1
heartbeat[29987]: 2008/09/12_16:12:15 debug: HA configuration OK.  Heartbeat starting.
heartbeat[29988]: 2008/09/12_16:12:15 info: heartbeat: version 2.99.1
heartbeat[29988]: 2008/09/12_16:12:15 WARN: No Previous generation - starting at 1221203536
heartbeat[29988]: 2008/09/12_16:12:15 info: Heartbeat generation: 1221203536
heartbeat[29988]: 2008/09/12_16:12:15 info: No uuid found for current node - generating a new uuid.
heartbeat[29988]: 2008/09/12_16:12:15 debug: uuid is:780b5e1a-a0cd-45c0-a56a-dbb0938394ae
heartbeat[29988]: 2008/09/12_16:12:15 debug: FIFO process pid: 29991
heartbeat[29988]: 2008/09/12_16:12:15 debug: opening bcast eth1 (UDP/IP broadcast)
heartbeat[29988]: 2008/09/12_16:12:15 debug: glib: SO_BINDTODEVICE(r) set for device eth1
heartbeat[29988]: 2008/09/12_16:12:15 info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth1
heartbeat[29988]: 2008/09/12_16:12:15 debug: write process pid: 29992
heartbeat[29988]: 2008/09/12_16:12:15 debug: read child process pid: 29993
heartbeat[29988]: 2008/09/12_16:12:15 info: glib: UDP Broadcast heartbeat closed on port 694 interface eth1 - Status: 1
heartbeat[29988]: 2008/09/12_16:12:15 debug: make_io_childpair: CREATED childpair wchan socket 11
heartbeat[29988]: 2008/09/12_16:12:15 debug: make_io_childpair: CREATED childpair rchan socket 13
heartbeat[29988]: 2008/09/12_16:12:15 debug: opening bcast eth2 (UDP/IP broadcast)
heartbeat[29988]: 2008/09/12_16:12:15 debug: glib: SO_BINDTODEVICE(r) set for device eth2
heartbeat[29988]: 2008/09/12_16:12:15 info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth2
heartbeat[29988]: 2008/09/12_16:12:15 debug: write process pid: 29994
heartbeat[29988]: 2008/09/12_16:12:15 debug: read child process pid: 29995
heartbeat[29988]: 2008/09/12_16:12:15 info: glib: UDP Broadcast heartbeat closed on port 694 interface eth2 - Status: 1
heartbeat[29988]: 2008/09/12_16:12:15 debug: make_io_childpair: CREATED childpair wchan socket 12
heartbeat[29988]: 2008/09/12_16:12:15 debug: make_io_childpair: CREATED childpair rchan socket 15
heartbeat[29988]: 2008/09/12_16:12:15 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[29988]: 2008/09/12_16:12:15 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[29988]: 2008/09/12_16:12:15 info: G_main_add_SignalHandler: Added signal handler for signal 17
heartbeat[29988]: 2008/09/12_16:12:15 debug: Limiting CPU: 42 CPU seconds every 60000 milliseconds
heartbeat[29988]: 2008/09/12_16:12:15 debug: pid 29988 locked in memory.
heartbeat[29988]: 2008/09/12_16:12:15 debug: Waiting for child processes to start
heartbeat[29988]: 2008/09/12_16:12:15 info: Local status now set to: 'up'
heartbeat[29988]: 2008/09/12_16:12:15 debug: All your child process are belong to us
heartbeat[29988]: 2008/09/12_16:12:15 debug: Starting local status message @ 2000 ms intervals
heartbeat[29988]: 2008/09/12_16:12:15 debug: Forking temp process write_hostcachedata
heartbeat[29988]: 2008/09/12_16:12:15 info: Managed write_hostcachedata process 29996 exited with return code 0.
heartbeat[29991]: 2008/09/12_16:12:16 debug: pid 29991 locked in memory.
heartbeat[29991]: 2008/09/12_16:12:16 debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
heartbeat[29992]: 2008/09/12_16:12:16 debug: pid 29992 locked in memory.
heartbeat[29992]: 2008/09/12_16:12:16 debug: Limiting CPU: 24 CPU seconds every 60000 milliseconds
heartbeat[29993]: 2008/09/12_16:12:16 debug: pid 29993 locked in memory.
heartbeat[29993]: 2008/09/12_16:12:16 debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
heartbeat[29995]: 2008/09/12_16:12:16 debug: pid 29995 locked in memory.
heartbeat[29995]: 2008/09/12_16:12:16 debug: Limiting CPU: 6 CPU seconds every 60000 milliseconds
heartbeat[29994]: 2008/09/12_16:12:16 debug: pid 29994 locked in memory.
heartbeat[29994]: 2008/09/12_16:12:16 debug: Limiting CPU: 24 CPU seconds every 60000 milliseconds
heartbeat[29988]: 2008/09/12_16:12:16 info: Link node01:eth1 up.
heartbeat[29988]: 2008/09/12_16:12:16 debug: sending reqnodes msg to node node01
heartbeat[29988]: 2008/09/12_16:12:16 info: Status update for node node01: status up
heartbeat[29988]: 2008/09/12_16:12:16 debug: Status seqno: 2 msgtime: 1221203535
heartbeat[29988]: 2008/09/12_16:12:16 debug: Forking temp process write_hostcachedata
heartbeat[29988]: 2008/09/12_16:12:16 info: Link node02:eth1 up.
heartbeat[29988]: 2008/09/12_16:12:16 info: Link node02:eth2 up.
heartbeat[29988]: 2008/09/12_16:12:16 info: Managed write_hostcachedata process 30000 exited with return code 0.
heartbeat[29988]: 2008/09/12_16:12:17 debug: Get a reqnodes message from node01
heartbeat[29988]: 2008/09/12_16:12:17 debug: get_delnodelist: delnodelist= 
heartbeat[29988]: 2008/09/12_16:12:17 info: Link node01:eth2 up.
heartbeat[29988]: 2008/09/12_16:12:17 debug: Get a repnodes msg from node01
heartbeat[29988]: 2008/09/12_16:12:17 debug: nodelist received:node01 node02 
heartbeat[29988]: 2008/09/12_16:12:17 info: Comm_now_up(): updating status to active
heartbeat[29988]: 2008/09/12_16:12:17 info: Local status now set to: 'active'
heartbeat[29988]: 2008/09/12_16:12:17 info: Starting child client "/usr/lib/heartbeat/ccm" (90,90)
heartbeat[29988]: 2008/09/12_16:12:17 info: Starting child client "/usr/lib/heartbeat/cib" (90,90)
heartbeat[29988]: 2008/09/12_16:12:17 info: Starting child client "/usr/lib/heartbeat/lrmd -r" (0,0)
heartbeat[29988]: 2008/09/12_16:12:17 info: Starting child client "/usr/lib/heartbeat/stonithd" (0,0)
heartbeat[29988]: 2008/09/12_16:12:17 info: Starting child client "/usr/lib/heartbeat/attrd" (90,90)
heartbeat[29988]: 2008/09/12_16:12:17 info: Starting child client "/usr/lib/heartbeat/crmd" (90,90)
heartbeat[29988]: 2008/09/12_16:12:17 debug: Forking temp process write_hostcachedata
heartbeat[29988]: 2008/09/12_16:12:17 debug: Forking temp process write_delcachedata
heartbeat[30004]: 2008/09/12_16:12:17 info: Starting "/usr/lib/heartbeat/ccm" as uid 90  gid 90 (pid 30004)
ccm[30004]: 2008/09/12_16:12:17 debug: Signing in with Heartbeat
heartbeat[29988]: 2008/09/12_16:12:17 debug: APIregistration_dispatch() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: process_registerevent() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: client->gsource = 0x9024260
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*process_registerevent*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*APIregistration_dispatch*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: Checking client authorization for client ccm (90:90)
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node01
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node02
heartbeat[29988]: 2008/09/12_16:12:17 debug: Signing on API client 30004 (ccm)
heartbeat[30005]: 2008/09/12_16:12:17 info: Starting "/usr/lib/heartbeat/cib" as uid 90  gid 90 (pid 30005)
heartbeat[30006]: 2008/09/12_16:12:17 info: Starting "/usr/lib/heartbeat/lrmd -r" as uid 0  gid 0 (pid 30006)
heartbeat[30007]: 2008/09/12_16:12:17 info: Starting "/usr/lib/heartbeat/stonithd" as uid 0  gid 0 (pid 30007)
heartbeat[30008]: 2008/09/12_16:12:17 info: Starting "/usr/lib/heartbeat/attrd" as uid 90  gid 90 (pid 30008)
attrd[30008]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 15
attrd[30008]: 2008/09/12_16:12:17 info: main: Starting up....
attrd[30008]: 2008/09/12_16:12:17 debug: register_heartbeat_conn: Signing in with Heartbeat
heartbeat[29988]: 2008/09/12_16:12:17 debug: APIregistration_dispatch() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: process_registerevent() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: client->gsource = 0x9025ad8
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*process_registerevent*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*APIregistration_dispatch*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: Checking client authorization for client attrd (90:90)
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node01
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node02
heartbeat[29988]: 2008/09/12_16:12:17 debug: Signing on API client 30008 (attrd)
heartbeat[30009]: 2008/09/12_16:12:17 info: Starting "/usr/lib/heartbeat/crmd" as uid 90  gid 90 (pid 30009)
crmd[30009]: 2008/09/12_16:12:17 info: main: CRM Hg Version: Unknown
crmd[30009]: 2008/09/12_16:12:17 info: crmd_init: Starting crmd
crmd[30009]: 2008/09/12_16:12:17 debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
crmd[30009]: 2008/09/12_16:12:17 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:12:17 debug: do_fsa_action: actions:trace: 	// A_STARTUP
crmd[30009]: 2008/09/12_16:12:17 debug: do_startup: Registering Signal Handlers
crmd[30009]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 15
crmd[30009]: 2008/09/12_16:12:17 info: G_main_add_TriggerHandler: Added signal manual handler
crmd[30009]: 2008/09/12_16:12:17 debug: do_startup: Creating CIB and LRM objects
crmd[30009]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 17
crmd[30009]: 2008/09/12_16:12:17 debug: do_fsa_action: actions:trace: 	// A_CIB_START
crmd[30009]: 2008/09/12_16:12:17 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:17 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:17 debug: cib_native_signon: Connection to command channel failed
crmd[30009]: 2008/09/12_16:12:17 debug: cib_native_signon: Connection to CIB failed: connection failed
crmd[30009]: 2008/09/12_16:12:17 debug: cib_native_signoff: Signing out of the CIB Service
heartbeat[29988]: 2008/09/12_16:12:17 info: Status update for node node01: status active
heartbeat[29988]: 2008/09/12_16:12:17 debug: Status seqno: 7 msgtime: 1221203537
heartbeat[29988]: 2008/09/12_16:12:17 info: Managed write_delcachedata process 30011 exited with return code 0.
ccm[30004]: 2008/09/12_16:12:17 info: Hostname: node02
attrd[30008]: 2008/09/12_16:12:17 info: register_heartbeat_conn: Hostname: node02
attrd[30008]: 2008/09/12_16:12:17 info: register_heartbeat_conn: UUID: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
attrd[30008]: 2008/09/12_16:12:17 debug: main: CIB signon attempt 0
attrd[30008]: 2008/09/12_16:12:17 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
attrd[30008]: 2008/09/12_16:12:17 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
attrd[30008]: 2008/09/12_16:12:17 debug: cib_native_signon: Connection to command channel failed
attrd[30008]: 2008/09/12_16:12:17 debug: cib_native_signon: Connection to CIB failed: connection failed
attrd[30008]: 2008/09/12_16:12:17 debug: cib_native_signoff: Signing out of the CIB Service
cib[30005]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 15
cib[30005]: 2008/09/12_16:12:17 info: G_main_add_TriggerHandler: Added signal manual handler
cib[30005]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 17
cib[30005]: 2008/09/12_16:12:17 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[30005]: 2008/09/12_16:12:17 WARN: validate_cib_digest: No on-disk digest present
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk] <cib epoch="1" num_updates="1" admin_epoch="0" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping configuration
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]   <configuration >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping crm_config
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     <crm_config >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping cluster_property_set
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       <cluster_property_set id="cib-bootstrap-options" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping attributes
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         <attributes >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="INFINITY" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="-INFINITY" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="120s" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         </attributes>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       </cluster_property_set>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     </crm_config>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nodes
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     <nodes />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping resources
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     <resources >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping primitive
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       <primitive id="dummy" class="ocf" type="Dummy" provider="heartbeat" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping operations
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         <operations >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping op
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <op id="dummy:start" name="start" timeout="30" on_fail="restart" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping op
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <op id="dummy:monitor" name="monitor" timeout="30" on_fail="fence" interval="10" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping op
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <op id="dummy:stop" name="stop" timeout="30" on_fail="fence" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         </operations>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       </primitive>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping clone
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       <clone id="clnFencing" globally_unique="false" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping instance_attributes
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="clnFencing:attr" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping attributes
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <attributes >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="clnFencing:attr:clone_max" name="clone_max" value="2" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="clnFencing:attr:clone_node_max" name="clone_node_max" value="1" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           </attributes>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping primitive
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         <primitive id="prmFencing" class="stonith" type="external/sshTEST" provider="heartbeat" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping operations
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <operations >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping op
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]             <op id="prmFencing:op:monitor" name="monitor" interval="5s" timeout="20s" prereq="nothing" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping op
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]             <op id="prmFencing:op:start" name="start" timeout="20s" prereq="nothing" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           </operations>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping instance_attributes
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <instance_attributes id="prmFencing:attr" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping attributes
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]             <attributes >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]               <nvpair id="prmFencing:attr:hostlist" name="hostlist" value="node01,node02" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]               <nvpair id="prmFencing:attr:extension" name="extension" value="-mente" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]             </attributes>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           </instance_attributes>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         </primitive>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       </clone>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     </resources>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping constraints
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     <constraints >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping rsc_location
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       <rsc_location rsc="dummy" id="dummy:location1" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping rule
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         <rule id="dummy:rule1" score="200" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping expression
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <expression id="dummy:exp1" attribute="#uname" operation="eq" value="node01" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         </rule>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping rule
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         <rule id="dummy:rule2" score="100" >
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping expression
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]           <expression id="dummy:exp2" attribute="#uname" operation="eq" value="node02" />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]         </rule>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]       </rsc_location>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]     </constraints>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]   </configuration>
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: Dumping status
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk]   <status />
cib[30005]: 2008/09/12_16:12:17 debug: log_data_element: readCibXmlFile: [on-disk] </cib>
cib[30005]: 2008/09/12_16:12:17 debug: update_validation: Testing 'none' validation
cib[30005]: 2008/09/12_16:12:17 debug: update_validation: Testing 'pacemaker-0.6' validation
cib[30005]: 2008/09/12_16:12:17 debug: update_validation: Testing 'transitional-0.6' validation
cib[30005]: 2008/09/12_16:12:17 debug: update_validation: Testing 'pacemaker-0.7' validation
cib[30005]: 2008/09/12_16:12:17 notice: update_validation: Upgraded from <none> to transitional-0.6 validation
cib[30005]: 2008/09/12_16:12:17 notice: readCibXmlFile: Enabling transitional-0.6 validation on the existing (sane) configuration
cib[30005]: 2008/09/12_16:12:17 debug: activateCibXml: Triggering CIB write for start op
cib[30005]: 2008/09/12_16:12:17 info: startCib: CIB Initialization completed successfully
cib[30005]: 2008/09/12_16:12:17 debug: register_heartbeat_conn: Signing in with Heartbeat
heartbeat[29988]: 2008/09/12_16:12:17 debug: APIregistration_dispatch() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: process_registerevent() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: client->gsource = 0x9025b58
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*process_registerevent*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*APIregistration_dispatch*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: Checking client authorization for client cib (90:90)
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node01
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node02
heartbeat[29988]: 2008/09/12_16:12:17 debug: Signing on API client 30005 (cib)
lrmd[30006]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 15
lrmd[30006]: 2008/09/12_16:12:17 debug: LRM debug level set to 1
lrmd[30006]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 17
lrmd[30006]: 2008/09/12_16:12:17 debug: Enabling coredumps
lrmd[30006]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 10
lrmd[30006]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 12
lrmd[30006]: 2008/09/12_16:12:17 debug: main: run the loop...
lrmd[30006]: 2008/09/12_16:12:17 info: Started.
stonithd[30007]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 10
stonithd[30007]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 12
stonithd[30007]: 2008/09/12_16:12:17 debug: pid 30007 locked in memory.
heartbeat[29988]: 2008/09/12_16:12:17 debug: APIregistration_dispatch() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: process_registerevent() {
heartbeat[29988]: 2008/09/12_16:12:17 debug: client->gsource = 0x90272b0
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*process_registerevent*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: }/*APIregistration_dispatch*/;
heartbeat[29988]: 2008/09/12_16:12:17 debug: Checking client authorization for client stonithd (0:0)
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node01
heartbeat[29988]: 2008/09/12_16:12:17 debug: create_seq_snapshot_table:no missing packets found for node node02
heartbeat[29988]: 2008/09/12_16:12:17 debug: Signing on API client 30007 (stonithd)
cib[30005]: 2008/09/12_16:12:17 info: register_heartbeat_conn: Hostname: node02
cib[30005]: 2008/09/12_16:12:17 info: register_heartbeat_conn: UUID: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
cib[30005]: 2008/09/12_16:12:17 info: ccm_connect: Registering with CCM...
cib[30005]: 2008/09/12_16:12:17 WARN: ccm_connect: CCM Activation failed
cib[30005]: 2008/09/12_16:12:17 WARN: ccm_connect: CCM Connection failed 1 times (30 max)
heartbeat[29988]: 2008/09/12_16:12:17 info: Managed write_hostcachedata process 30010 exited with return code 0.
stonithd[30007]: 2008/09/12_16:12:17 info: register_heartbeat_conn: Hostname: node02
stonithd[30007]: 2008/09/12_16:12:17 info: register_heartbeat_conn: UUID: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
stonithd[30007]: 2008/09/12_16:12:17 debug: Setting message filter mode
stonithd[30007]: 2008/09/12_16:12:17 debug: apichan=0x83e0dc0
stonithd[30007]: 2008/09/12_16:12:17 debug: callback_chan=0x83e0cd0
stonithd[30007]: 2008/09/12_16:12:17 notice: /usr/lib/heartbeat/stonithd start up successfully.
stonithd[30007]: 2008/09/12_16:12:17 info: G_main_add_SignalHandler: Added signal handler for signal 17
crmd[30009]: 2008/09/12_16:12:18 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:18 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:18 debug: cib_native_signon: Connection to command channel failed
crmd[30009]: 2008/09/12_16:12:18 debug: cib_native_signon: Connection to CIB failed: connection failed
crmd[30009]: 2008/09/12_16:12:18 debug: cib_native_signoff: Signing out of the CIB Service
crmd[30009]: 2008/09/12_16:12:18 debug: do_cib_control: Could not connect to the CIB service
crmd[30009]: 2008/09/12_16:12:18 WARN: do_cib_control: Couldn't complete CIB registration 1 times... pause and retry
crmd[30009]: 2008/09/12_16:12:18 debug: crm_timer_start: Started Wait Timer (I_NULL:2000ms), src=5
crmd[30009]: 2008/09/12_16:12:18 debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
crmd[30009]: 2008/09/12_16:12:18 debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x180021000000006, stalled=true
crmd[30009]: 2008/09/12_16:12:18 info: crmd_init: Starting crmd's mainloop
ccm[30004]: 2008/09/12_16:12:19 debug: node state CCM_STATE_NONE -> CCM_STATE_NONE
ccm[30004]: 2008/09/12_16:12:19 debug: node state CCM_STATE_NONE -> CCM_STATE_NONE
ccm[30004]: 2008/09/12_16:12:19 info: G_main_add_SignalHandler: Added signal handler for signal 15
crmd[30009]: 2008/09/12_16:12:20 info: crm_timer_popped: Wait Timer (I_NULL) just popped!
crmd[30009]: 2008/09/12_16:12:20 debug: do_fsa_action: actions:trace: 	// A_CIB_START
crmd[30009]: 2008/09/12_16:12:20 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:20 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:20 debug: cib_native_signon: Connection to command channel failed
crmd[30009]: 2008/09/12_16:12:20 debug: cib_native_signon: Connection to CIB failed: connection failed
crmd[30009]: 2008/09/12_16:12:20 debug: cib_native_signoff: Signing out of the CIB Service
cib[30005]: 2008/09/12_16:12:20 info: ccm_connect: Registering with CCM...
cib[30005]: 2008/09/12_16:12:20 debug: ccm_connect: CCM Activation passed... all set to go!
cib[30005]: 2008/09/12_16:12:20 info: cib_init: Requesting the list of configured nodes
cib[30005]: 2008/09/12_16:12:20 debug: Delaying cstatus request for 177 ms
ccm[30004]: 2008/09/12_16:12:20 debug: recv msg hbapi-clstat from node02, status:join
cib[30005]: 2008/09/12_16:12:20 info: cib_init: Starting cib mainloop
cib[30005]: 2008/09/12_16:12:20 info: cib_client_status_callback: Status update: Client node02/cib now has status [join]
cib[30005]: 2008/09/12_16:12:20 debug: crm_new_peer: Creating entry for node node02/0
cib[30005]: 2008/09/12_16:12:20 info: crm_update_peer: Node 0 is now known as node02
cib[30005]: 2008/09/12_16:12:20 info: crm_update_peer: New Node node02: id=0 state=unknown addr=(null) votes=-1 born=0 seen=0 proc=00000000000000000000000000000000
cib[30005]: 2008/09/12_16:12:20 info: crm_update_peer_proc: node02.cib is now online
cib[30005]: 2008/09/12_16:12:20 info: cib_client_status_callback: Status update: Client node01/cib now has status [join]
cib[30005]: 2008/09/12_16:12:21 debug: crm_new_peer: Creating entry for node node01/0
cib[30005]: 2008/09/12_16:12:21 info: crm_update_peer: Node 0 is now known as node01
cib[30005]: 2008/09/12_16:12:21 info: crm_update_peer: New Node node01: id=0 state=unknown addr=(null) votes=-1 born=0 seen=0 proc=00000000000000000000000000000000
cib[30005]: 2008/09/12_16:12:21 info: crm_update_peer_proc: node01.cib is now online
cib[30005]: 2008/09/12_16:12:21 info: cib_client_status_callback: Status update: Client node02/cib now has status [online]
cib[30005]: 2008/09/12_16:12:21 debug: Forking temp process write_cib_contents
cib[30021]: 2008/09/12_16:12:21 WARN: validate_cib_digest: No on-disk digest present
cib[30021]: 2008/09/12_16:12:21 info: write_cib_contents: Wrote version 0.1.1 of the CIB to disk (digest: 3bd7a17b4597eabb310d1a51b2c972d1)
cib[30021]: 2008/09/12_16:12:21 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[30005]: 2008/09/12_16:12:21 info: Managed write_cib_contents process 30021 exited with return code 0.
crmd[30009]: 2008/09/12_16:12:21 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
crmd[30009]: 2008/09/12_16:12:21 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_callback
crmd[30009]: 2008/09/12_16:12:21 debug: cib_native_signon: Connection to CIB successful
cib[30005]: 2008/09/12_16:12:21 info: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 30009 (45d81afc-4757-4582-a039-8e89dcefed4f): on
crmd[30009]: 2008/09/12_16:12:21 info: do_cib_control: CIB connection established
crmd[30009]: 2008/09/12_16:12:21 debug: do_fsa_action: actions:trace: 	// A_HA_CONNECT
crmd[30009]: 2008/09/12_16:12:21 debug: register_heartbeat_conn: Signing in with Heartbeat
heartbeat[29988]: 2008/09/12_16:12:21 WARN: 1 lost packet(s) for [node01] [15:17]
heartbeat[29988]: 2008/09/12_16:12:21 info: No pkts missing from node01!
heartbeat[29988]: 2008/09/12_16:12:21 debug: APIregistration_dispatch() {
heartbeat[29988]: 2008/09/12_16:12:21 debug: process_registerevent() {
heartbeat[29988]: 2008/09/12_16:12:21 debug: client->gsource = 0x902b558
heartbeat[29988]: 2008/09/12_16:12:21 debug: }/*process_registerevent*/;
heartbeat[29988]: 2008/09/12_16:12:21 debug: }/*APIregistration_dispatch*/;
cib[30005]: 2008/09/12_16:12:21 info: cib_client_status_callback: Status update: Client node01/cib now has status [online]
heartbeat[29988]: 2008/09/12_16:12:21 debug: Checking client authorization for client crmd (90:90)
heartbeat[29988]: 2008/09/12_16:12:21 debug: create_seq_snapshot_table:no missing packets found for node node01
heartbeat[29988]: 2008/09/12_16:12:21 debug: create_seq_snapshot_table:no missing packets found for node node02
heartbeat[29988]: 2008/09/12_16:12:21 debug: Signing on API client 30009 (crmd)
crmd[30009]: 2008/09/12_16:12:21 info: register_heartbeat_conn: Hostname: node02
crmd[30009]: 2008/09/12_16:12:21 info: register_heartbeat_conn: UUID: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
ccm[30004]: 2008/09/12_16:12:21 debug: recv msg status from node01, status:active
crmd[30009]: 2008/09/12_16:12:21 debug: Delaying cstatus request for 37 ms
ccm[30004]: 2008/09/12_16:12:21 debug: status of node node01: active -> active
crmd[30009]: 2008/09/12_16:12:21 info: do_ha_control: Connected to Heartbeat
crmd[30009]: 2008/09/12_16:12:21 debug: do_fsa_action: actions:trace: 	// A_READCONFIG
crmd[30009]: 2008/09/12_16:12:21 debug: do_fsa_action: actions:trace: 	// A_LRM_CONNECT
crmd[30009]: 2008/09/12_16:12:21 debug: do_lrm_control: Connecting to the LRM
lrmd[30006]: 2008/09/12_16:12:21 debug: on_msg_register:client crmd [30009] registered
crmd[30009]: 2008/09/12_16:12:21 debug: do_lrm_control: LRM connection established
crmd[30009]: 2008/09/12_16:12:21 debug: do_fsa_action: actions:trace: 	// A_CCM_CONNECT
crmd[30009]: 2008/09/12_16:12:21 info: do_ccm_control: CCM connection established... waiting for first callback
crmd[30009]: 2008/09/12_16:12:21 debug: do_fsa_action: actions:trace: 	// A_STARTED
crmd[30009]: 2008/09/12_16:12:21 info: do_started: Delaying start, CCM (0000000000100000) not connected
crmd[30009]: 2008/09/12_16:12:21 debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
crmd[30009]: 2008/09/12_16:12:21 debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
crmd[30009]: 2008/09/12_16:12:21 debug: fsa_dump_inputs: Added input: 0000000000000100 (R_CIB_CONNECTED)
crmd[30009]: 2008/09/12_16:12:21 debug: fsa_dump_inputs: Added input: 0000000000000800 (R_LRM_CONNECTED)
crmd[30009]: 2008/09/12_16:12:21 debug: config_query_callback: Call 3 : Parsing CIB options
crmd[30009]: 2008/09/12_16:12:21 notice: crmd_client_status_callback: Status update: Client node02/crmd now has status [online] (DC=false)
crmd[30009]: 2008/09/12_16:12:22 debug: crm_new_peer: Creating entry for node node02/0
crmd[30009]: 2008/09/12_16:12:22 info: crm_update_peer: Node 0 is now known as node02
crmd[30009]: 2008/09/12_16:12:22 info: crm_update_peer: New Node node02: id=0 state=unknown addr=(null) votes=-1 born=0 seen=0 proc=00000000000000000000000000000000
crmd[30009]: 2008/09/12_16:12:22 info: crm_update_peer_proc: node02.crmd is now online
crmd[30009]: 2008/09/12_16:12:22 info: crmd_client_status_callback: Not the DC
crmd[30009]: 2008/09/12_16:12:22 notice: crmd_client_status_callback: Status update: Client node01/crmd now has status [online] (DC=false)
attrd[30008]: 2008/09/12_16:12:22 debug: main: CIB signon attempt 1
attrd[30008]: 2008/09/12_16:12:22 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_rw
attrd[30008]: 2008/09/12_16:12:22 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/cib_callback
attrd[30008]: 2008/09/12_16:12:22 debug: cib_native_signon: Connection to CIB successful
ccm[30004]: 2008/09/12_16:12:22 debug: recv msg hbapi-clstat from node01, status:join
crmd[30009]: 2008/09/12_16:12:22 debug: crm_new_peer: Creating entry for node node01/0
ccm[30004]: 2008/09/12_16:12:22 debug: recv msg CCM_TYPE_PROTOVERSION from node01, status:[null ptr]
crmd[30009]: 2008/09/12_16:12:22 info: crm_update_peer: Node 0 is now known as node01
ccm[30004]: 2008/09/12_16:12:22 debug: send msg CCM_TYPE_PROTOVERSION to cluster, status:[null]
crmd[30009]: 2008/09/12_16:12:22 info: crm_update_peer: New Node node01: id=0 state=unknown addr=(null) votes=-1 born=0 seen=0 proc=00000000000000000000000000000000
ccm[30004]: 2008/09/12_16:12:22 debug: node state CCM_STATE_NONE -> CCM_STATE_VERSION_REQUEST
crmd[30009]: 2008/09/12_16:12:22 info: crm_update_peer_proc: node01.crmd is now online
crmd[30009]: 2008/09/12_16:12:22 info: crmd_client_status_callback: Not the DC
crmd[30009]: 2008/09/12_16:12:22 notice: crmd_client_status_callback: Status update: Client node02/crmd now has status [online] (DC=false)
heartbeat[29988]: 2008/09/12_16:12:23 WARN: 1 lost packet(s) for [node01] [20:22]
heartbeat[29988]: 2008/09/12_16:12:23 info: No pkts missing from node01!
ccm[30004]: 2008/09/12_16:12:23 debug: recv msg CCM_TYPE_PROTOVERSION from node02, status:[null ptr]
crmd[30009]: 2008/09/12_16:12:23 info: crmd_client_status_callback: Not the DC
ccm[30004]: 2008/09/12_16:12:23 debug: No quorum selected,using default quorum plugin(majority:twonodes)
crmd[30009]: 2008/09/12_16:12:23 notice: crmd_client_status_callback: Status update: Client node01/crmd now has status [online] (DC=false)
ccm[30004]: 2008/09/12_16:12:23 debug: quorum plugin: majority
ccm[30004]: 2008/09/12_16:12:23 debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
ccm[30004]: 2008/09/12_16:12:23 debug: total_node_count=2, total_quorum_votes=200
ccm[30004]: 2008/09/12_16:12:23 debug: quorum plugin: twonodes
ccm[30004]: 2008/09/12_16:12:23 debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
ccm[30004]: 2008/09/12_16:12:23 debug: total_node_count=2, total_quorum_votes=200
ccm[30004]: 2008/09/12_16:12:23 info: Break tie for 2 nodes cluster
ccm[30004]: 2008/09/12_16:12:23 debug: node state CCM_STATE_VERSION_REQUEST -> CCM_STATE_JOINED
ccm[30004]: 2008/09/12_16:12:23 debug: dump current membership 0xb7ee3008
ccm[30004]: 2008/09/12_16:12:23 debug: 	leader=node02
ccm[30004]: 2008/09/12_16:12:23 debug: 	transition=1
ccm[30004]: 2008/09/12_16:12:23 debug: 	status=CCM_STATE_JOINED
ccm[30004]: 2008/09/12_16:12:23 debug: 	has_quorum=1
ccm[30004]: 2008/09/12_16:12:23 debug: 	nodename=node02 bornon=1
ccm[30004]: 2008/09/12_16:12:23 debug: quorum is 1
ccm[30004]: 2008/09/12_16:12:23 debug: delivering new membership to 2 clients: 
ccm[30004]: 2008/09/12_16:12:23 debug: client: pid =30009
ccm[30004]: 2008/09/12_16:12:23 debug: client: pid =30005
ccm[30004]: 2008/09/12_16:12:23 debug: send msg CCM_TYPE_PROTOVERSION_RESP to node01, status:[null]
cib[30005]: 2008/09/12_16:12:23 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
cib[30005]: 2008/09/12_16:12:23 info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
cib[30005]: 2008/09/12_16:12:23 info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=1)
cib[30005]: 2008/09/12_16:12:23 info: crm_update_peer: Node node02 now has id: 1
cib[30005]: 2008/09/12_16:12:23 info: crm_update_peer: Node node02: id=1 (new) state=member (new) addr=(null) votes=-1 born=1 seen=1 proc=00000000000000000000000000000100
cib[30005]: 2008/09/12_16:12:23 info: crm_update_peer_proc: node02.ais is now online
crmd[30009]: 2008/09/12_16:12:23 info: crmd_client_status_callback: Not the DC
crmd[30009]: 2008/09/12_16:12:23 debug: do_fsa_action: actions:trace: 	// A_STARTED
crmd[30009]: 2008/09/12_16:12:23 info: do_started: Delaying start, CCM (0000000000100000) not connected
crmd[30009]: 2008/09/12_16:12:23 debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
crmd[30009]: 2008/09/12_16:12:23 debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
crmd[30009]: 2008/09/12_16:12:23 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
crmd[30009]: 2008/09/12_16:12:23 info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
crmd[30009]: 2008/09/12_16:12:23 info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=1)
crmd[30009]: 2008/09/12_16:12:23 info: crm_update_quorum: Updating quorum status to true (call=4)
crmd[30009]: 2008/09/12_16:12:23 info: ccm_event_detail: NEW MEMBERSHIP: trans=1, nodes=1, new=1, lost=0 n_idx=0, new_idx=0, old_idx=3
crmd[30009]: 2008/09/12_16:12:23 info: ccm_event_detail: 	CURRENT: node02 [nodeid=1, born=1]
crmd[30009]: 2008/09/12_16:12:23 info: ccm_event_detail: 	NEW:     node02 [nodeid=1, born=1]
crmd[30009]: 2008/09/12_16:12:23 info: crm_update_peer: Node node02 now has id: 1
crmd[30009]: 2008/09/12_16:12:23 info: crm_update_peer: Node node02: id=1 (new) state=member (new) addr=(null) votes=-1 born=1 seen=1 proc=00000000000000000000000000000200
crmd[30009]: 2008/09/12_16:12:23 info: crm_update_peer_proc: node02.ais is now online
crmd[30009]: 2008/09/12_16:12:23 debug: post_cache_update: Updated cache after membership event 1.
crmd[30009]: 2008/09/12_16:12:23 debug: do_fsa_action: actions:trace: 	// A_STARTED
crmd[30009]: 2008/09/12_16:12:23 debug: do_started: Init server comms
crmd[30009]: 2008/09/12_16:12:23 info: do_started: The local CRM is operational
crmd[30009]: 2008/09/12_16:12:23 debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
crmd[30009]: 2008/09/12_16:12:23 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:12:23 info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
crmd[30009]: 2008/09/12_16:12:23 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:12:23 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:12:23 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_QUERY
ccm[30004]: 2008/09/12_16:12:23 WARN: ccm_state_joined: received message with unknown cookie, just dropping
cib[30005]: 2008/09/12_16:12:23 debug: activateCibXml: Triggering CIB write for cib_modify op
ccm[30004]: 2008/09/12_16:12:23 debug: dump current membership 0xb7ee3008
cib[30005]: 2008/09/12_16:12:23 info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/4): ok (rc=0)
ccm[30004]: 2008/09/12_16:12:23 debug: 	leader=node02
cib[30005]: 2008/09/12_16:12:23 info: cib_process_request: Operation complete: op cib_slave for section 'all' (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/5): ok (rc=0)
ccm[30004]: 2008/09/12_16:12:23 debug: 	transition=1
cib[30005]: 2008/09/12_16:12:23 debug: Forking temp process write_cib_contents
ccm[30004]: 2008/09/12_16:12:23 debug: 	status=CCM_STATE_JOINED
ccm[30004]: 2008/09/12_16:12:23 debug: 	has_quorum=1
ccm[30004]: 2008/09/12_16:12:23 debug: 	nodename=node02 bornon=1
cib[30031]: 2008/09/12_16:12:23 info: write_cib_contents: Wrote version 0.2.1 of the CIB to disk (digest: 4752ce8f7b030c046827f3d45ca563bc)
cib[30031]: 2008/09/12_16:12:23 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[30005]: 2008/09/12_16:12:23 info: Managed write_cib_contents process 30031 exited with return code 0.
ccm[30004]: 2008/09/12_16:12:24 debug: recv msg CCM_TYPE_ALIVE from node01, status:[null ptr]
ccm[30004]: 2008/09/12_16:12:24 debug: quorum plugin: majority
ccm[30004]: 2008/09/12_16:12:24 debug: cluster:linux-ha, member_count=2, member_quorum_votes=200
ccm[30004]: 2008/09/12_16:12:24 debug: total_node_count=2, total_quorum_votes=200
ccm[30004]: 2008/09/12_16:12:24 debug: send msg CCM_TYPE_MEM_LIST to cluster, status:[null]
cib[30005]: 2008/09/12_16:12:24 info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
ccm[30004]: 2008/09/12_16:12:24 debug: dump current membership 0xb7ee3008
cib[30005]: 2008/09/12_16:12:24 info: mem_handle_event: no mbr_track info
ccm[30004]: 2008/09/12_16:12:24 debug: 	leader=node02
cib[30005]: 2008/09/12_16:12:24 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
ccm[30004]: 2008/09/12_16:12:24 debug: 	transition=2
cib[30005]: 2008/09/12_16:12:24 info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
ccm[30004]: 2008/09/12_16:12:24 debug: 	status=CCM_STATE_JOINED
cib[30005]: 2008/09/12_16:12:24 info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=2)
ccm[30004]: 2008/09/12_16:12:24 debug: 	has_quorum=1
cib[30005]: 2008/09/12_16:12:24 info: crm_update_peer: Node node01: id=0 state=member (new) addr=(null) votes=-1 born=2 seen=2 proc=00000000000000000000000000000100
ccm[30004]: 2008/09/12_16:12:24 debug: 	nodename=node02 bornon=1
cib[30005]: 2008/09/12_16:12:24 info: crm_update_peer_proc: node01.ais is now online
ccm[30004]: 2008/09/12_16:12:24 debug: 	nodename=node01 bornon=2
ccm[30004]: 2008/09/12_16:12:24 debug: quorum is 1
ccm[30004]: 2008/09/12_16:12:24 debug: delivering new membership to 2 clients: 
ccm[30004]: 2008/09/12_16:12:24 debug: client: pid =30009
ccm[30004]: 2008/09/12_16:12:24 debug: client: pid =30005
crmd[30009]: 2008/09/12_16:12:24 debug: do_cl_join_query: Querying for a DC
crmd[30009]: 2008/09/12_16:12:24 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
crmd[30009]: 2008/09/12_16:12:24 debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:48000ms), src=14
crmd[30009]: 2008/09/12_16:12:24 info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
crmd[30009]: 2008/09/12_16:12:24 info: mem_handle_event: no mbr_track info
crmd[30009]: 2008/09/12_16:12:24 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
crmd[30009]: 2008/09/12_16:12:24 info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
crmd[30009]: 2008/09/12_16:12:24 info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=2)
crmd[30009]: 2008/09/12_16:12:24 info: ccm_event_detail: NEW MEMBERSHIP: trans=2, nodes=2, new=1, lost=0 n_idx=0, new_idx=2, old_idx=4
crmd[30009]: 2008/09/12_16:12:24 info: ccm_event_detail: 	CURRENT: node02 [nodeid=1, born=1]
crmd[30009]: 2008/09/12_16:12:24 info: ccm_event_detail: 	CURRENT: node01 [nodeid=0, born=2]
crmd[30009]: 2008/09/12_16:12:24 info: ccm_event_detail: 	NEW:     node01 [nodeid=0, born=2]
crmd[30009]: 2008/09/12_16:12:24 info: crm_update_peer: Node node01: id=0 state=member (new) addr=(null) votes=-1 born=2 seen=2 proc=00000000000000000000000000000200
crmd[30009]: 2008/09/12_16:12:24 info: crm_update_peer_proc: node01.ais is now online
crmd[30009]: 2008/09/12_16:12:24 debug: post_cache_update: Updated cache after membership event 2.
ccm[30004]: 2008/09/12_16:12:24 debug: recv msg CCM_TYPE_MEM_LIST from node02, status:[null ptr]
ccm[30004]: 2008/09/12_16:12:24 WARN: ccm_state_joined: received message with unknown cookie, just dropping
ccm[30004]: 2008/09/12_16:12:24 debug: dump current membership 0xb7ee3008
ccm[30004]: 2008/09/12_16:12:24 debug: 	leader=node02
ccm[30004]: 2008/09/12_16:12:24 debug: 	transition=2
ccm[30004]: 2008/09/12_16:12:24 debug: 	status=CCM_STATE_JOINED
ccm[30004]: 2008/09/12_16:12:24 debug: 	has_quorum=1
ccm[30004]: 2008/09/12_16:12:24 debug: 	nodename=node02 bornon=1
ccm[30004]: 2008/09/12_16:12:24 debug: 	nodename=node01 bornon=2
cib[30005]: 2008/09/12_16:12:25 WARN: cib_process_diff: Diff 0.1.1 -> 0.2.1 not applied to 0.2.1: current "epoch" is greater than required
attrd[30008]: 2008/09/12_16:12:27 info: main: Starting mainloop...
crmd[30009]: 2008/09/12_16:13:12 info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
crmd[30009]: 2008/09/12_16:13:12 debug: s_crmd_fsa: Processing I_DC_TIMEOUT: [ state=S_PENDING cause=C_TIMER_POPPED origin=crm_timer_popped ]
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_WARN  
crmd[30009]: 2008/09/12_16:13:12 WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
crmd[30009]: 2008/09/12_16:13:12 info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
crmd[30009]: 2008/09/12_16:13:12 debug: do_election_vote: Destroying voted hash
crmd[30009]: 2008/09/12_16:13:12 debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=15
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:12 debug: do_election_count_vote: Created voted hash
crmd[30009]: 2008/09/12_16:13:12 debug: do_election_count_vote: Election 2, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:12 info: do_election_count_vote: Updated voted hash for node02 to vote
crmd[30009]: 2008/09/12_16:13:12 info: do_election_count_vote: Election ignore: our vote (node02)
crmd[30009]: 2008/09/12_16:13:12 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:12 info: do_election_check: Still waiting on 1 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:13 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:13 debug: do_election_count_vote: Election 2, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:13 info: do_election_count_vote: Updated voted hash for node01 to no-vote
cib[30005]: 2008/09/12_16:13:13 info: cib_common_callback_worker: Setting cib_diff_notify callbacks for 30009 (45d81afc-4757-4582-a039-8e89dcefed4f): on
crmd[30009]: 2008/09/12_16:13:13 info: do_election_count_vote: Election ignore: no-vote from node01
crmd[30009]: 2008/09/12_16:13:13 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:13 debug: do_election_check: Destroying voted hash
crmd[30009]: 2008/09/12_16:13:13 debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[30009]: 2008/09/12_16:13:13 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:13:13 info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[30009]: 2008/09/12_16:13:13 debug: do_fsa_action: actions:trace: 	// A_TE_START
crmd[30009]: 2008/09/12_16:13:13 info: do_te_control: Registering TE UUID: c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0
crmd[30009]: 2008/09/12_16:13:13 info: G_main_add_TriggerHandler: Added signal manual handler
crmd[30009]: 2008/09/12_16:13:13 info: G_main_add_TriggerHandler: Added signal manual handler
crmd[30009]: 2008/09/12_16:13:13 WARN: cib_client_add_notify_callback: Callback already present
crmd[30009]: 2008/09/12_16:13:13 info: set_graph_functions: Setting custom graph functions
crmd[30009]: 2008/09/12_16:13:13 info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
crmd[30009]: 2008/09/12_16:13:13 debug: do_te_control: Transitioner is now active
crmd[30009]: 2008/09/12_16:13:13 debug: do_fsa_action: actions:trace: 	// A_PE_START
crmd[30009]: 2008/09/12_16:13:13 info: start_subsystem: Starting sub-system "pengine"
crmd[30168]: 2008/09/12_16:13:13 debug: start_subsystem: Executing "/usr/lib/heartbeat/pengine (pengine)" (pid 30168)
pengine[30168]: 2008/09/12_16:13:13 info: G_main_add_SignalHandler: Added signal handler for signal 15
pengine[30168]: 2008/09/12_16:13:13 debug: main: Init server comms
pengine[30168]: 2008/09/12_16:13:13 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/pengine
pengine[30168]: 2008/09/12_16:13:13 debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/heartbeat/crm/pengine
pengine[30168]: 2008/09/12_16:13:13 info: main: Starting pengine
crmd[30009]: 2008/09/12_16:13:16 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/pengine
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
crmd[30009]: 2008/09/12_16:13:16 debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=19
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
crmd[30009]: 2008/09/12_16:13:16 info: do_dc_takeover: Taking over DC status for this partition
cib[30005]: 2008/09/12_16:13:16 info: cib_process_readwrite: We are now in R/W mode
cib[30005]: 2008/09/12_16:13:16 info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/7): ok (rc=0)
cib[30005]: 2008/09/12_16:13:16 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: - <cib epoch="2" num_updates="1" />
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: + <cib crm_feature_set="3.0" epoch="3" num_updates="1" />
cib[30005]: 2008/09/12_16:13:16 info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/8): ok (rc=0)
cib[30005]: 2008/09/12_16:13:16 debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//nvpair[@name='dc-version'] does not exist
cib[30005]: 2008/09/12_16:13:16 debug: Forking temp process write_cib_contents
cib[30005]: 2008/09/12_16:13:16 info: cib_process_xpath: Processing cib_query op for /cib (/cib)
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
crmd[30009]: 2008/09/12_16:13:16 debug: initialize_join: join-1: Initializing join data (flag=true)
crmd[30009]: 2008/09/12_16:13:16 info: join_make_offer: Making join offers based on membership 2
crmd[30009]: 2008/09/12_16:13:16 debug: join_make_offer: join-1: Sending offer to node01
crmd[30009]: 2008/09/12_16:13:16 debug: join_make_offer: join-1: Sending offer to node02
crmd[30009]: 2008/09/12_16:13:16 info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
crmd[30009]: 2008/09/12_16:13:16 debug: fsa_dump_inputs: Added input: 0000000000000001 (R_THE_DC)
crmd[30009]: 2008/09/12_16:13:16 debug: fsa_dump_inputs: Added input: 0000000000000010 (R_JOIN_OK)
crmd[30009]: 2008/09/12_16:13:16 debug: fsa_dump_inputs: Added input: 0000000000000080 (R_INVOKE_PE)
crmd[30009]: 2008/09/12_16:13:16 debug: fsa_dump_inputs: Added input: 0000000000000200 (R_PE_CONNECTED)
crmd[30009]: 2008/09/12_16:13:16 debug: fsa_dump_inputs: Added input: 0000000000000400 (R_TE_CONNECTED)
crmd[30009]: 2008/09/12_16:13:16 debug: fsa_dump_inputs: Added input: 0000000000002000 (R_PE_REQUIRED)
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:16 debug: do_election_count_vote: Created voted hash
crmd[30009]: 2008/09/12_16:13:16 debug: do_election_count_vote: Election 2, owner: d0cced4b-51d6-4456-9a82-1a01cd8e6cc3
crmd[30009]: 2008/09/12_16:13:16 info: do_election_count_vote: Election check: vote from node01
crmd[30009]: 2008/09/12_16:13:16 info: do_election_count_vote: Election 2 won over node01: born_on
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:16 debug: do_election_check: Ignore election check: we not in an election
crmd[30009]: 2008/09/12_16:13:16 debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
crmd[30009]: 2008/09/12_16:13:16 info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:16 debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
crmd[30009]: 2008/09/12_16:13:16 debug: do_election_vote: Destroying voted hash
crmd[30009]: 2008/09/12_16:13:16 debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=23
crmd[30009]: 2008/09/12_16:13:16 info: te_connect_stonith: Attempting connection to fencing daemon...
cib[30005]: 2008/09/12_16:13:16 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: - <cib epoch="3" num_updates="1" />
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: + <cib epoch="4" num_updates="1" >
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping configuration
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +   <configuration >
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping crm_config
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +     <crm_config >
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping cluster_property_set
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping attributes
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +         <attributes >
cib[30005]: 2008/09/12_16:13:16 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="0.7.1-Unknown" __crm_diff_marker__="added:top" />
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +         </attributes>
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +       </cluster_property_set>
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +     </crm_config>
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: +   </configuration>
cib[30005]: 2008/09/12_16:13:16 info: log_data_element: cib:diff: + </cib>
cib[30005]: 2008/09/12_16:13:16 info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/11): ok (rc=0)
cib[30178]: 2008/09/12_16:13:16 info: write_cib_contents: Wrote version 0.3.1 of the CIB to disk (digest: e69c83387df92e3729effb263fb4d065)
cib[30178]: 2008/09/12_16:13:16 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[30005]: 2008/09/12_16:13:16 info: Managed write_cib_contents process 30178 exited with return code 0.
cib[30005]: 2008/09/12_16:13:16 debug: Forking temp process write_cib_contents
cib[30179]: 2008/09/12_16:13:16 info: write_cib_contents: Wrote version 0.4.1 of the CIB to disk (digest: 6be0680fe2592d96629263078432427f)
cib[30179]: 2008/09/12_16:13:16 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[30005]: 2008/09/12_16:13:16 info: Managed write_cib_contents process 30179 exited with return code 0.
crmd[30009]: 2008/09/12_16:13:17 debug: stonithd_signon: creating connection
crmd[30009]: 2008/09/12_16:13:17 debug: sending out the signon msg.
crmd[30009]: 2008/09/12_16:13:17 debug: signed on to stonithd.
stonithd[30007]: 2008/09/12_16:13:17 debug: client tengine (pid=30009) succeeded to signon to stonithd.
crmd[30009]: 2008/09/12_16:13:17 info: te_connect_stonith: Connected
crmd[30009]: 2008/09/12_16:13:17 debug: handle_request: Raising I_JOIN_OFFER: join-1
crmd[30009]: 2008/09/12_16:13:17 debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_ELECTION cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_WARN  
crmd[30009]: 2008/09/12_16:13:17 WARN: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_vote: Destroying voted hash
crmd[30009]: 2008/09/12_16:13:17 debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=23
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Created voted hash
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Election 3, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Ignore old 'vote' from node02: 3 vs. 4
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:17 info: do_election_check: Still waiting on 2 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Election 3, owner: d0cced4b-51d6-4456-9a82-1a01cd8e6cc3
crmd[30009]: 2008/09/12_16:13:17 info: do_election_count_vote: Election check: vote from node01
crmd[30009]: 2008/09/12_16:13:17 info: do_election_count_vote: Election 3 won over node01: born_on
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:17 info: do_election_check: Still waiting on 2 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Created voted hash
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Election 3, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_count_vote: Ignore old 'no-vote' from node01: 3 vs. 4
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:17 info: do_election_check: Still waiting on 2 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:17 debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
crmd[30009]: 2008/09/12_16:13:17 debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
crmd[30009]: 2008/09/12_16:13:17 debug: do_election_vote: Destroying voted hash
crmd[30009]: 2008/09/12_16:13:17 debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=23
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Created voted hash
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Election 4, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Ignore old 'vote' from node02: 4 vs. 5
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:18 info: do_election_check: Still waiting on 2 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Election 5, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:18 info: do_election_count_vote: Updated voted hash for node02 to vote
crmd[30009]: 2008/09/12_16:13:18 info: do_election_count_vote: Election ignore: our vote (node02)
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:18 info: do_election_check: Still waiting on 1 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Election 4, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Ignore old 'no-vote' from node01: 4 vs. 5
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:18 info: do_election_check: Still waiting on 1 non-votes (2 total)
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_count_vote: Election 5, owner: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae
crmd[30009]: 2008/09/12_16:13:18 info: do_election_count_vote: Updated voted hash for node01 to no-vote
crmd[30009]: 2008/09/12_16:13:18 info: do_election_count_vote: Election ignore: no-vote from node01
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
crmd[30009]: 2008/09/12_16:13:18 debug: do_election_check: Destroying voted hash
crmd[30009]: 2008/09/12_16:13:18 debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:13:18 info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_TE_START
crmd[30009]: 2008/09/12_16:13:18 debug: do_te_control: The transitioner is already active
crmd[30009]: 2008/09/12_16:13:18 debug: do_fsa_action: actions:trace: 	// A_PE_START
crmd[30009]: 2008/09/12_16:13:18 info: start_subsystem: Starting sub-system "pengine"
crmd[30009]: 2008/09/12_16:13:18 WARN: start_subsystem: Client pengine already running as pid 30168
crmd[30009]: 2008/09/12_16:13:21 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/pengine
crmd[30009]: 2008/09/12_16:13:21 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:21 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
cib[30005]: 2008/09/12_16:13:21 info: cib_process_readwrite: We are now in R/O mode
crmd[30009]: 2008/09/12_16:13:21 debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=26
cib[30005]: 2008/09/12_16:13:21 info: cib_process_request: Operation complete: op cib_slave_all for section 'all' (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/12): ok (rc=0)
crmd[30009]: 2008/09/12_16:13:21 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
cib[30005]: 2008/09/12_16:13:21 info: cib_process_readwrite: We are now in R/W mode
crmd[30009]: 2008/09/12_16:13:21 debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
crmd[30009]: 2008/09/12_16:13:21 info: do_dc_takeover: Taking over DC status for this partition
cib[30005]: 2008/09/12_16:13:21 info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/13): ok (rc=0)
cib[30005]: 2008/09/12_16:13:21 info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/14): ok (rc=0)
cib[30005]: 2008/09/12_16:13:21 info: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/attributes/nvpair[6])
crmd[30009]: 2008/09/12_16:13:21 debug: log_data_element: Dumping nvpair
cib[30005]: 2008/09/12_16:13:21 info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/16): ok (rc=0)
crmd[30009]: 2008/09/12_16:13:21 debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="0.7.1-Unknown" />
crmd[30009]: 2008/09/12_16:13:21 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
crmd[30009]: 2008/09/12_16:13:21 debug: initialize_join: join-2: Initializing join data (flag=true)
crmd[30009]: 2008/09/12_16:13:21 debug: join_make_offer: join-2: Sending offer to node01
crmd[30009]: 2008/09/12_16:13:21 debug: join_make_offer: join-2: Sending offer to node02
crmd[30009]: 2008/09/12_16:13:21 info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
crmd[30009]: 2008/09/12_16:13:22 debug: handle_request: Raising I_JOIN_OFFER: join-2
crmd[30009]: 2008/09/12_16:13:22 debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:22 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
crmd[30009]: 2008/09/12_16:13:22 info: update_dc: Set DC to node02 (3.0)
crmd[30009]: 2008/09/12_16:13:22 debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
crmd[30009]: 2008/09/12_16:13:22 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:22 debug: join_query_callback: Respond to join offer join-2
crmd[30009]: 2008/09/12_16:13:22 debug: join_query_callback: Acknowledging node02 as our DC
crmd[30009]: 2008/09/12_16:13:22 debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:22 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
crmd[30009]: 2008/09/12_16:13:22 debug: do_dc_join_filter_offer: Processing req from node02
crmd[30009]: 2008/09/12_16:13:22 debug: do_dc_join_filter_offer: join-2: Welcoming node node02 (ref join_request-crmd-1221203602-11)
crmd[30009]: 2008/09/12_16:13:22 debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-2
crmd[30009]: 2008/09/12_16:13:22 debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
crmd[30009]: 2008/09/12_16:13:22 debug: do_dc_join_filter_offer: join-2: Still waiting on 1 outstanding offers
crmd[30009]: 2008/09/12_16:13:23 debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
crmd[30009]: 2008/09/12_16:13:23 debug: do_dc_join_filter_offer: Processing req from node01
crmd[30009]: 2008/09/12_16:13:23 debug: do_dc_join_filter_offer: join-2: Welcoming node node01 (ref join_request-crmd-1221203602-8)
crmd[30009]: 2008/09/12_16:13:23 debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-2
crmd[30009]: 2008/09/12_16:13:23 debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
crmd[30009]: 2008/09/12_16:13:23 debug: check_join_state: join-2: Integration of 2 peers complete: do_dc_join_filter_offer
crmd[30009]: 2008/09/12_16:13:23 debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
crmd[30009]: 2008/09/12_16:13:23 info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
crmd[30009]: 2008/09/12_16:13:23 info: do_state_transition: All 2 cluster nodes responded to the join offer.
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
crmd[30009]: 2008/09/12_16:13:23 debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=30
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
crmd[30009]: 2008/09/12_16:13:23 debug: do_dc_join_finalize: Finializing join-2 for 2 clients
crmd[30009]: 2008/09/12_16:13:23 info: update_attrd: Connecting to attrd...
crmd[30009]: 2008/09/12_16:13:23 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/attrd
crmd[30009]: 2008/09/12_16:13:23 debug: update_attrd: sent attrd refresh
crmd[30009]: 2008/09/12_16:13:23 debug: check_join_state: Invoked by do_dc_join_finalize in state: S_FINALIZE_JOIN
crmd[30009]: 2008/09/12_16:13:23 debug: check_join_state: join-2: Still waiting on 2 integrated nodes
crmd[30009]: 2008/09/12_16:13:23 debug: finalize_join: Notifying 2 clients of join-2 results
crmd[30009]: 2008/09/12_16:13:23 debug: finalize_join_for: join-2: ACK'ing join request from node01, state member
crmd[30009]: 2008/09/12_16:13:23 debug: finalize_join_for: join-2: ACK'ing join request from node02, state member
attrd[30008]: 2008/09/12_16:13:23 info: attrd_local_callback: Sending full refresh
crmd[30009]: 2008/09/12_16:13:23 debug: fsa_dump_inputs: Added input: 0000000000020000 (R_HAVE_CIB)
cib[30005]: 2008/09/12_16:13:23 info: sync_our_cib: Syncing CIB to all peers
cib[30005]: 2008/09/12_16:13:23 info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/18): ok (rc=0)
cib[30005]: 2008/09/12_16:13:23 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: - <cib epoch="4" num_updates="1" />
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: + <cib dc-uuid="780b5e1a-a0cd-45c0-a56a-dbb0938394ae" epoch="5" num_updates="1" />
cib[30005]: 2008/09/12_16:13:23 info: cib_process_request: Operation complete: op cib_modify for section 'all' (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/19): ok (rc=0)
cib[30005]: 2008/09/12_16:13:23 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: - <cib epoch="5" num_updates="1" />
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: + <cib epoch="6" num_updates="1" >
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping configuration
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +   <configuration >
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping nodes
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +     <nodes >
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping node
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +       <node id="d0cced4b-51d6-4456-9a82-1a01cd8e6cc3" uname="node01" type="normal" __crm_diff_marker__="added:top" />
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +     </nodes>
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +   </configuration>
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: + </cib>
cib[30005]: 2008/09/12_16:13:23 info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/20): ok (rc=0)
cib[30005]: 2008/09/12_16:13:23 debug: activateCibXml: Triggering CIB write for cib_modify op
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: - <cib epoch="6" num_updates="1" />
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping cib
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: + <cib epoch="7" num_updates="1" >
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping configuration
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +   <configuration >
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping nodes
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +     <nodes >
cib[30005]: 2008/09/12_16:13:23 debug: log_data_element: Dumping node
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +       <node id="780b5e1a-a0cd-45c0-a56a-dbb0938394ae" uname="node02" type="normal" __crm_diff_marker__="added:top" />
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +     </nodes>
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: +   </configuration>
cib[30005]: 2008/09/12_16:13:23 info: log_data_element: cib:diff: + </cib>
cib[30005]: 2008/09/12_16:13:23 info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/21): ok (rc=0)
cib[30005]: 2008/09/12_16:13:23 debug: Forking temp process write_cib_contents
cib[30198]: 2008/09/12_16:13:23 info: write_cib_contents: Wrote version 0.7.1 of the CIB to disk (digest: 13c899822639589c46627a2ca03d9f29)
cib[30198]: 2008/09/12_16:13:23 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
cib[30005]: 2008/09/12_16:13:23 info: Managed write_cib_contents process 30198 exited with return code 0.
crmd[30009]: 2008/09/12_16:13:23 debug: handle_request: Raising I_JOIN_RESULT: join-2
crmd[30009]: 2008/09/12_16:13:23 debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
crmd[30009]: 2008/09/12_16:13:23 info: update_dc: Set DC to node02 (3.0)
crmd[30009]: 2008/09/12_16:13:23 debug: do_cl_join_finalize_respond: Confirming join join-2: join_ack_nack
crmd[30009]: 2008/09/12_16:13:23 debug: do_cl_join_finalize_respond: join-2: Join complete.  Sending local LRM status to node02
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
crmd[30009]: 2008/09/12_16:13:23 debug: do_dc_join_ack: Ignoring op=join_ack_nack message from node02
crmd[30009]: 2008/09/12_16:13:23 debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
crmd[30009]: 2008/09/12_16:13:23 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
crmd[30009]: 2008/09/12_16:13:23 info: do_dc_join_ack: join-2: Updating node state to member for node02
crmd[30009]: 2008/09/12_16:13:23 debug: erase_status_tag: Erasing //node_state[@uname='node02']/lrm
crmd[30009]: 2008/09/12_16:13:23 debug: do_dc_join_ack: join-2: Registered callback for LRM update 23
cib[30005]: 2008/09/12_16:13:23 debug: cib_process_xpath: //node_state[@uname='node02']/lrm was already removed
cib[30005]: 2008/09/12_16:13:23 info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node02']/lrm (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/22): ok (rc=0)
crmd[30009]: 2008/09/12_16:13:23 debug: join_update_complete_callback: Join update 23 complete
crmd[30009]: 2008/09/12_16:13:23 debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
crmd[30009]: 2008/09/12_16:13:23 debug: check_join_state: join-2: Still waiting on 1 finalized nodes
crmd[30009]: 2008/09/12_16:13:24 debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
crmd[30009]: 2008/09/12_16:13:24 debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
crmd[30009]: 2008/09/12_16:13:24 debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
crmd[30009]: 2008/09/12_16:13:24 info: do_dc_join_ack: join-2: Updating node state to member for node01
crmd[30009]: 2008/09/12_16:13:24 debug: erase_status_tag: Erasing //node_state[@uname='node01']/lrm
crmd[30009]: 2008/09/12_16:13:24 debug: do_dc_join_ack: join-2: Registered callback for LRM update 25
cib[30005]: 2008/09/12_16:13:24 debug: cib_process_xpath: //node_state[@uname='node01']/lrm was already removed
cib[30005]: 2008/09/12_16:13:24 info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node01']/lrm (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/24): ok (rc=0)
crmd[30009]: 2008/09/12_16:13:24 debug: join_update_complete_callback: Join update 25 complete
crmd[30009]: 2008/09/12_16:13:24 debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
crmd[30009]: 2008/09/12_16:13:24 debug: check_join_state: join-2 complete: join_update_complete_callback
crmd[30009]: 2008/09/12_16:13:24 debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
crmd[30009]: 2008/09/12_16:13:24 info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
crmd[30009]: 2008/09/12_16:13:24 info: populate_cib_nodes_ha: Requesting the list of configured nodes
crmd[30009]: 2008/09/12_16:13:25 notice: populate_cib_nodes_ha: Node: node02 (uuid: 780b5e1a-a0cd-45c0-a56a-dbb0938394ae)
crmd[30009]: 2008/09/12_16:13:25 notice: populate_cib_nodes_ha: Node: node01 (uuid: d0cced4b-51d6-4456-9a82-1a01cd8e6cc3)
crmd[30009]: 2008/09/12_16:13:25 debug: ghash_update_cib_node: Updating node01: true (overwrite=true) hash_size=2
crmd[30009]: 2008/09/12_16:13:25 debug: ghash_update_cib_node: Updating node02: true (overwrite=true) hash_size=2
crmd[30009]: 2008/09/12_16:13:25 info: do_state_transition: All 2 cluster nodes are eligible to run resources.
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
crmd[30009]: 2008/09/12_16:13:25 debug: do_te_invoke: Cancelling the active Transition
crmd[30009]: 2008/09/12_16:13:25 info: abort_transition_graph: do_te_invoke:195 - Triggered graph processing (complete=1) : Peer Cancelled
crmd[30009]: 2008/09/12_16:13:25 info: print_xml_formatted: abort_transition_graph: Cause: NULL
crmd[30009]: 2008/09/12_16:13:25 debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
crmd[30009]: 2008/09/12_16:13:25 debug: do_pe_invoke: Requesting the current CIB: S_POLICY_ENGINE
cib[30005]: 2008/09/12_16:13:25 info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/b37880f5-7c6f-410b-ab76-2ba745c8b625/26): ok (rc=0)
crmd[30009]: 2008/09/12_16:13:25 debug: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1221203605-15, seq=2, quorate=1
pengine[30168]: 2008/09/12_16:13:25 WARN: process_pe_message: Your current configuration only conforms to transitional-0.6
pengine[30168]: 2008/09/12_16:13:25 WARN: process_pe_message: Please use 'cibadmin --upgrade' to convert to pacemaker-0.7
pengine[30168]: 2008/09/12_16:13:25 debug: update_validation: Testing 'transitional-0.6' validation
pengine[30168]: 2008/09/12_16:13:25 notice: update_validation: Upgrading transitional-0.6-style configuration to pacemaker-0.7 with /usr/share/pacemaker/upgrade06.xsl
pengine[30168]: 2008/09/12_16:13:25 info: update_validation: Transformation /usr/share/pacemaker/upgrade06.xsl successful
pengine[30168]: 2008/09/12_16:13:25 notice: update_validation: Upgraded from transitional-0.6 to pacemaker-0.7 validation
pengine[30168]: 2008/09/12_16:13:25 WARN: process_pe_message: Your configuration was internally updated to pacemaker-0.7
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: Default action timeout: 120s
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: Default stickiness: 1000000
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: Stop all active resources: false
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: Default failure timeout: 0
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: Default migration threshold: 0
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: STONITH of failed nodes is enabled
pengine[30168]: 2008/09/12_16:13:25 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
pengine[30168]: 2008/09/12_16:13:25 notice: unpack_config: On loss of CCM Quorum: Ignore
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:13:25 info: determine_online_status: Node node02 is online
pengine[30168]: 2008/09/12_16:13:25 info: determine_online_status: Node node01 is online
pengine[30168]: 2008/09/12_16:13:25 notice: native_print: dummy	(ocf::heartbeat:Dummy):	Stopped 
pengine[30168]: 2008/09/12_16:13:25 notice: clone_print: Clone Set: clnFencing
pengine[30168]: 2008/09/12_16:13:25 notice: native_print:     prmFencing:0	(stonith:external/sshTEST):	Stopped 
pengine[30168]: 2008/09/12_16:13:25 notice: native_print:     prmFencing:1	(stonith:external/sshTEST):	Stopped 
pengine[30168]: 2008/09/12_16:13:25 debug: native_assign_node: Assigning node01 to dummy
pengine[30168]: 2008/09/12_16:13:25 debug: native_assign_node: Assigning node02 to prmFencing:0
pengine[30168]: 2008/09/12_16:13:25 debug: native_assign_node: Assigning node01 to prmFencing:1
pengine[30168]: 2008/09/12_16:13:25 debug: clone_color: Allocated 2 clnFencing instances of a possible 2
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 notice: StartRsc:  node01	Start dummy
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 notice: RecurringOp:  Start recurring monitor (10s) for dummy on node01
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 notice: StartRsc:  node02	Start prmFencing:0
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 notice: RecurringOp:  Start recurring monitor (5s) for prmFencing:0 on node02
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 notice: StartRsc:  node01	Start prmFencing:1
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 notice: RecurringOp:  Start recurring monitor (5s) for prmFencing:1 on node01
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: child_stopping_constraints: clnFencing has no active children
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:13:25 debug: get_last_sequence: Series file /var/lib/heartbeat/pengine/pe-input.last does not exist
crmd[30009]: 2008/09/12_16:13:25 debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:13:25 info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:25 debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
crmd[30009]: 2008/09/12_16:13:25 info: unpack_graph: Unpacked transition 0: 16 actions in 16 synapses
crmd[30009]: 2008/09/12_16:13:25 info: do_te_invoke: Processing graph 0 derived from /var/lib/heartbeat/pengine/pe-input-0.bz2
crmd[30009]: 2008/09/12_16:13:25 debug: start_global_timer: Starting abort timer: 60000ms
crmd[30009]: 2008/09/12_16:13:25 debug: initiate_action: Action 4: Increasing IDLE timer to 240000
crmd[30009]: 2008/09/12_16:13:25 info: send_rsc_command: Initiating action 4: monitor dummy_monitor_0 on node01
crmd[30009]: 2008/09/12_16:13:25 debug: send_rsc_command: Action 4: Increasing transition 0 timeout to 300000 (2*120000 + 60000)
crmd[30009]: 2008/09/12_16:13:25 info: send_rsc_command: Initiating action 7: monitor dummy_monitor_0 on node02
crmd[30009]: 2008/09/12_16:13:25 info: send_rsc_command: Initiating action 8: monitor prmFencing:0_monitor_0 on node02
crmd[30009]: 2008/09/12_16:13:25 info: send_rsc_command: Initiating action 5: monitor prmFencing:1_monitor_0 on node01
crmd[30009]: 2008/09/12_16:13:25 debug: run_graph: Transition 0: (Complete=0, Pending=4, Fired=4, Skipped=0, Incomplete=12)
crmd[30009]: 2008/09/12_16:13:25 debug: start_global_timer: Starting abort timer: 300000ms
pengine[30168]: 2008/09/12_16:13:25 info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-0.bz2
pengine[30168]: 2008/09/12_16:13:25 info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
lrmd[30006]: 2008/09/12_16:13:26 debug: on_msg_add_rsc:client [30009] adds resource dummy
crmd[30009]: 2008/09/12_16:13:26 info: do_lrm_rsc_op: Performing key=7:0:7:c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0 op=dummy_monitor_0 )
lrmd[30006]: 2008/09/12_16:13:26 debug: on_msg_perform_op:2356: copying parameters for rsc dummy
lrmd[30006]: 2008/09/12_16:13:26 debug: on_msg_perform_op: add an operation operation monitor[2] on ocf::Dummy::dummy for client 30009, its parameters: CRM_meta_op_target_rc=[7] CRM_meta_timeout=[120000] crm_feature_set=[3.0]  to the operation list.
lrmd[30006]: 2008/09/12_16:13:26 info: rsc:dummy: monitor
lrmd[30006]: 2008/09/12_16:13:26 debug: on_msg_add_rsc:client [30009] adds resource prmFencing:0
lrmd[30006]: 2008/09/12_16:13:26 notice: lrmd_rsc_new(): No lrm_rprovider field in message
crmd[30009]: 2008/09/12_16:13:26 info: do_lrm_rsc_op: Performing key=8:0:7:c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0 op=prmFencing:0_monitor_0 )
lrmd[30006]: 2008/09/12_16:13:26 debug: on_msg_perform_op:2356: copying parameters for rsc prmFencing:0
stonithd[30007]: 2008/09/12_16:13:26 debug: client STONITH_RA_EXEC_30213 (pid=30213) succeeded to signon to stonithd.
lrmd[30006]: 2008/09/12_16:13:26 debug: on_msg_perform_op: add an operation operation monitor[3] on stonith::external/sshTEST::prmFencing:0 for client 30009, its parameters: CRM_meta_op_target_rc=[7] extension=[-mente] hostlist=[node01,node02] CRM_meta_timeout=[120000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_clone=[0] CRM_meta_clone_node_max=[1]  to the operation list.
lrmd[30006]: 2008/09/12_16:13:26 info: rsc:prmFencing:0: monitor
lrmd[30213]: 2008/09/12_16:13:26 debug: stonithd_signon: creating connection
lrmd[30213]: 2008/09/12_16:13:26 debug: sending out the signon msg.
lrmd[30213]: 2008/09/12_16:13:26 debug: signed on to stonithd.
stonithd[30007]: 2008/09/12_16:13:26 debug: client STONITH_RA_EXEC_30213 [pid: 30213] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30007]: 2008/09/12_16:13:26 debug: stonithRA_monitor: prmFencing:0 is not started.
stonithd[30007]: 2008/09/12_16:13:26 debug: Child process unknown_prmFencing:0_monitor [30214] exited, its exit code: 7 when signo=0.
stonithd[30007]: 2008/09/12_16:13:26 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=7
lrmd[30213]: 2008/09/12_16:13:26 debug: waiting for the stonithRA reply msg.
lrmd[30213]: 2008/09/12_16:13:26 debug: a stonith RA operation queue to run, call_id=30214.
lrmd[30213]: 2008/09/12_16:13:26 debug: stonithd_receive_ops_result: begin
stonithd[30007]: 2008/09/12_16:13:26 debug: client STONITH_RA_EXEC_30213 (pid=30213) signed off
lrmd[30006]: 2008/09/12_16:13:26 WARN: Managed prmFencing:0:monitor process 30213 exited with return code 7.
crmd[30009]: 2008/09/12_16:13:26 info: process_lrm_event: LRM operation prmFencing:0_monitor_0 (call=3, rc=7, cib-update=29, confirmed=true) complete not running
crmd[30009]: 2008/09/12_16:13:26 debug: te_update_diff: Processing diff (cib_modify): 0.7.3 -> 0.7.4 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:26 info: match_graph_event: Action prmFencing:0_monitor_0 (8) confirmed on node02 (rc=0)
crmd[30009]: 2008/09/12_16:13:26 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:26 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:26 debug: run_graph: Transition 0: (Complete=1, Pending=3, Fired=0, Skipped=0, Incomplete=12)
Dummy[30207][30216]: 2008/09/12_16:13:26 DEBUG: dummy monitor : 7
lrmd[30006]: 2008/09/12_16:13:26 WARN: Managed dummy:monitor process 30207 exited with return code 7.
crmd[30009]: 2008/09/12_16:13:26 info: process_lrm_event: LRM operation dummy_monitor_0 (call=2, rc=7, cib-update=30, confirmed=true) complete not running
crmd[30009]: 2008/09/12_16:13:26 debug: te_update_diff: Processing diff (cib_modify): 0.7.4 -> 0.7.5 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:26 info: match_graph_event: Action dummy_monitor_0 (7) confirmed on node02 (rc=0)
crmd[30009]: 2008/09/12_16:13:26 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:26 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:26 info: send_rsc_command: Initiating action 6: probe_complete probe_complete on node02
crmd[30009]: 2008/09/12_16:13:26 debug: send_rsc_command: Skipping wait for 6
crmd[30009]: 2008/09/12_16:13:26 debug: run_graph: Transition 0: (Complete=2, Pending=2, Fired=1, Skipped=0, Incomplete=11)
crmd[30009]: 2008/09/12_16:13:26 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:26 debug: run_graph: Transition 0: (Complete=3, Pending=2, Fired=0, Skipped=0, Incomplete=11)
cib[30005]: 2008/09/12_16:13:26 debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='780b5e1a-a0cd-45c0-a56a-dbb0938394ae']//nvpair[@name='probe_complete'] does not exist
cib[30005]: 2008/09/12_16:13:26 info: cib_process_xpath: Processing cib_query op for /cib (/cib)
crmd[30009]: 2008/09/12_16:13:27 debug: te_update_diff: Processing diff (cib_modify): 0.7.6 -> 0.7.7 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:27 info: match_graph_event: Action prmFencing:1_monitor_0 (5) confirmed on node01 (rc=0)
crmd[30009]: 2008/09/12_16:13:27 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:27 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=4, Pending=1, Fired=0, Skipped=0, Incomplete=11)
crmd[30009]: 2008/09/12_16:13:27 debug: te_update_diff: Processing diff (cib_modify): 0.7.7 -> 0.7.8 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:27 info: match_graph_event: Action dummy_monitor_0 (4) confirmed on node01 (rc=0)
crmd[30009]: 2008/09/12_16:13:27 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:27 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:27 info: send_rsc_command: Initiating action 3: probe_complete probe_complete on node01
crmd[30009]: 2008/09/12_16:13:27 debug: send_rsc_command: Skipping wait for 3
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=5, Pending=0, Fired=1, Skipped=0, Incomplete=10)
crmd[30009]: 2008/09/12_16:13:27 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:27 info: te_pseudo_action: Pseudo action 2 fired and confirmed
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=6, Pending=0, Fired=1, Skipped=0, Incomplete=9)
crmd[30009]: 2008/09/12_16:13:27 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:27 info: send_rsc_command: Initiating action 9: start dummy_start_0 on node01
crmd[30009]: 2008/09/12_16:13:27 info: te_pseudo_action: Pseudo action 17 fired and confirmed
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=7, Pending=1, Fired=2, Skipped=0, Incomplete=7)
crmd[30009]: 2008/09/12_16:13:27 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:27 info: te_pseudo_action: Pseudo action 15 fired and confirmed
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=8, Pending=1, Fired=1, Skipped=0, Incomplete=6)
crmd[30009]: 2008/09/12_16:13:27 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:27 info: send_rsc_command: Initiating action 11: start prmFencing:0_start_0 on node02
crmd[30009]: 2008/09/12_16:13:27 info: send_rsc_command: Initiating action 13: start prmFencing:1_start_0 on node01
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=9, Pending=3, Fired=2, Skipped=0, Incomplete=4)
crmd[30009]: 2008/09/12_16:13:27 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:27 info: do_lrm_rsc_op: Performing key=11:0:0:c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0 op=prmFencing:0_start_0 )
lrmd[30006]: 2008/09/12_16:13:27 debug: on_msg_perform_op:2356: copying parameters for rsc prmFencing:0
lrmd[30006]: 2008/09/12_16:13:27 debug: on_msg_perform_op: add an operation operation start[4] on stonith::external/sshTEST::prmFencing:0 for client 30009, its parameters: extension=[-mente] CRM_meta_requires=[nothing] hostlist=[node01,node02] CRM_meta_timeout=[20000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_name=[start] CRM_meta_clone=[0] CRM_meta_clone_node_max=[1]  to the operation list.
lrmd[30006]: 2008/09/12_16:13:27 info: rsc:prmFencing:0: start
lrmd[30220]: 2008/09/12_16:13:27 debug: stonithd_signon: creating connection
lrmd[30220]: 2008/09/12_16:13:27 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:27 debug: client STONITH_RA_EXEC_30220 (pid=30220) succeeded to signon to stonithd.
lrmd[30220]: 2008/09/12_16:13:27 debug: signed on to stonithd.
lrmd[30220]: 2008/09/12_16:13:27 info: Try to start STONITH resource <rsc_id=prmFencing:0> : Device=external/sshTEST
lrmd[30220]: 2008/09/12_16:13:27 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:27 debug: client STONITH_RA_EXEC_30220 [pid: 30220] requests a resource operation start on prmFencing:0 (external/sshTEST)
stonithd[30007]: 2008/09/12_16:13:27 debug: external_set_config: called.
stonithd[30007]: 2008/09/12_16:13:27 debug: external_get_confignames: called.
stonithd[30007]: 2008/09/12_16:13:27 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST getconfignames'
stonithd[30007]: 2008/09/12_16:13:27 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST getconfignames' output: hostlist

stonithd[30007]: 2008/09/12_16:13:27 debug: external_get_confignames: 'sshTEST getconfignames' returned 0
stonithd[30007]: 2008/09/12_16:13:27 debug: external_get_confignames: sshTEST configname hostlist
stonithd[30227]: 2008/09/12_16:13:27 debug: external_status: called.
stonithd[30227]: 2008/09/12_16:13:27 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30220]: 2008/09/12_16:13:27 debug: a stonith RA operation queue to run, call_id=30227.
lrmd[30220]: 2008/09/12_16:13:27 debug: stonithd_receive_ops_result: begin
stonithd[30227]: 2008/09/12_16:13:27 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30227]: 2008/09/12_16:13:27 debug: external_status: running 'sshTEST status' returned 0
stonithd[30227]: 2008/09/12_16:13:27 debug: external_hostlist: called.
stonithd[30227]: 2008/09/12_16:13:27 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST gethosts'
stonithd[30227]: 2008/09/12_16:13:27 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST gethosts' output: node01
node02

stonithd[30227]: 2008/09/12_16:13:27 debug: external_hostlist: running 'sshTEST gethosts' returned 0
stonithd[30227]: 2008/09/12_16:13:27 debug: external_hostlist: sshTEST host node01
stonithd[30227]: 2008/09/12_16:13:27 debug: external_hostlist: sshTEST host node02
stonithd[30227]: 2008/09/12_16:13:27 debug: prmFencing:0 claims it can manage node node01
stonithd[30227]: 2008/09/12_16:13:27 debug: remove us (node02) from the host list for prmFencing:0
stonithd[30007]: 2008/09/12_16:13:27 debug: Child process external_prmFencing:0_start [30227] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:27 debug: prmFencing:0's (external/sshTEST) op start finished. op_result=0
lrmd[30006]: 2008/09/12_16:13:27 info: Managed prmFencing:0:start process 30220 exited with return code 0.
lrmd[30006]: 2008/09/12_16:13:27 debug: stonithRA plugin: provider attribute is not needed and will be ignored.
stonithd[30007]: 2008/09/12_16:13:27 debug: client STONITH_RA_EXEC_30220 (pid=30220) signed off
crmd[30009]: 2008/09/12_16:13:27 info: process_lrm_event: LRM operation prmFencing:0_start_0 (call=4, rc=0, cib-update=34, confirmed=true) complete ok
crmd[30009]: 2008/09/12_16:13:27 debug: te_update_diff: Processing diff (cib_modify): 0.7.8 -> 0.7.9 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:27 info: match_graph_event: Action prmFencing:0_start_0 (11) confirmed on node02 (rc=0)
crmd[30009]: 2008/09/12_16:13:27 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:27 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:27 info: send_rsc_command: Initiating action 12: monitor prmFencing:0_monitor_5000 on node02
crmd[30009]: 2008/09/12_16:13:27 debug: run_graph: Transition 0: (Complete=10, Pending=3, Fired=1, Skipped=0, Incomplete=3)
crmd[30009]: 2008/09/12_16:13:27 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:28 info: do_lrm_rsc_op: Performing key=12:0:0:c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0 op=prmFencing:0_monitor_5000 )
lrmd[30006]: 2008/09/12_16:13:28 debug: on_msg_perform_op: add an operation operation monitor[5] on stonith::external/sshTEST::prmFencing:0 for client 30009, its parameters: CRM_meta_interval=[5000] extension=[-mente] CRM_meta_requires=[nothing] hostlist=[node01,node02] CRM_meta_timeout=[20000] CRM_meta_clone_max=[2] crm_feature_set=[3.0] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_clone=[0] CRM_meta_clone_node_max=[1]  to the operation list.
lrmd[30263]: 2008/09/12_16:13:28 debug: stonithd_signon: creating connection
lrmd[30263]: 2008/09/12_16:13:28 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:28 debug: client STONITH_RA_EXEC_30263 (pid=30263) succeeded to signon to stonithd.
lrmd[30263]: 2008/09/12_16:13:28 debug: signed on to stonithd.
stonithd[30007]: 2008/09/12_16:13:28 debug: client STONITH_RA_EXEC_30263 [pid: 30263] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30264]: 2008/09/12_16:13:28 debug: external_status: called.
stonithd[30264]: 2008/09/12_16:13:28 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30263]: 2008/09/12_16:13:28 debug: waiting for the stonithRA reply msg.
lrmd[30263]: 2008/09/12_16:13:28 debug: a stonith RA operation queue to run, call_id=30264.
lrmd[30263]: 2008/09/12_16:13:28 debug: stonithd_receive_ops_result: begin
stonithd[30264]: 2008/09/12_16:13:28 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30264]: 2008/09/12_16:13:28 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:28 debug: Child process external_prmFencing:0_monitor [30264] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:28 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:28 debug: client STONITH_RA_EXEC_30263 (pid=30263) signed off
crmd[30009]: 2008/09/12_16:13:28 info: process_lrm_event: LRM operation prmFencing:0_monitor_5000 (call=5, rc=0, cib-update=35, confirmed=false) complete ok
crmd[30009]: 2008/09/12_16:13:28 debug: te_update_diff: Processing diff (cib_modify): 0.7.9 -> 0.7.10 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:28 info: match_graph_event: Action prmFencing:0_monitor_5000 (12) confirmed on node02 (rc=0)
crmd[30009]: 2008/09/12_16:13:28 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:28 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:28 debug: run_graph: Transition 0: (Complete=11, Pending=2, Fired=0, Skipped=0, Incomplete=3)
crmd[30009]: 2008/09/12_16:13:28 debug: te_update_diff: Processing diff (cib_modify): 0.7.11 -> 0.7.12 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:28 info: match_graph_event: Action dummy_start_0 (9) confirmed on node01 (rc=0)
crmd[30009]: 2008/09/12_16:13:28 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:28 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:28 info: send_rsc_command: Initiating action 10: monitor dummy_monitor_10000 on node01
crmd[30009]: 2008/09/12_16:13:28 debug: run_graph: Transition 0: (Complete=12, Pending=2, Fired=1, Skipped=0, Incomplete=2)
crmd[30009]: 2008/09/12_16:13:28 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:28 debug: te_update_diff: Processing diff (cib_modify): 0.7.12 -> 0.7.13 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:28 info: match_graph_event: Action prmFencing:1_start_0 (13) confirmed on node01 (rc=0)
crmd[30009]: 2008/09/12_16:13:28 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:28 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:28 info: send_rsc_command: Initiating action 14: monitor prmFencing:1_monitor_5000 on node01
crmd[30009]: 2008/09/12_16:13:28 info: te_pseudo_action: Pseudo action 16 fired and confirmed
crmd[30009]: 2008/09/12_16:13:28 debug: run_graph: Transition 0: (Complete=13, Pending=2, Fired=2, Skipped=0, Incomplete=0)
crmd[30009]: 2008/09/12_16:13:28 debug: start_global_timer: Starting abort timer: 300000ms
crmd[30009]: 2008/09/12_16:13:28 debug: run_graph: Transition 0: (Complete=14, Pending=2, Fired=0, Skipped=0, Incomplete=0)
crmd[30009]: 2008/09/12_16:13:30 debug: te_update_diff: Processing diff (cib_modify): 0.7.13 -> 0.7.14 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:30 info: match_graph_event: Action dummy_monitor_10000 (10) confirmed on node01 (rc=0)
crmd[30009]: 2008/09/12_16:13:30 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:30 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:30 debug: run_graph: Transition 0: (Complete=15, Pending=1, Fired=0, Skipped=0, Incomplete=0)
crmd[30009]: 2008/09/12_16:13:30 debug: te_update_diff: Processing diff (cib_modify): 0.7.14 -> 0.7.15 (S_TRANSITION_ENGINE)
crmd[30009]: 2008/09/12_16:13:30 info: match_graph_event: Action prmFencing:1_monitor_5000 (14) confirmed on node01 (rc=0)
crmd[30009]: 2008/09/12_16:13:30 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:30 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:13:30 debug: run_graph: ====================================================
crmd[30009]: 2008/09/12_16:13:30 info: run_graph: Transition 0: (Complete=16, Pending=0, Fired=0, Skipped=0, Incomplete=0)
crmd[30009]: 2008/09/12_16:13:30 info: te_graph_trigger: Transition 0 is now complete
crmd[30009]: 2008/09/12_16:13:30 debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
crmd[30009]: 2008/09/12_16:13:30 info: notify_crmd: Transition 0 status: done - <null>
crmd[30009]: 2008/09/12_16:13:30 debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[30009]: 2008/09/12_16:13:30 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:13:30 info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[30009]: 2008/09/12_16:13:30 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:30 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:13:30 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
lrmd[30290]: 2008/09/12_16:13:33 debug: stonithd_signon: creating connection
lrmd[30290]: 2008/09/12_16:13:33 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:33 debug: client STONITH_RA_EXEC_30290 (pid=30290) succeeded to signon to stonithd.
lrmd[30290]: 2008/09/12_16:13:33 debug: signed on to stonithd.
lrmd[30290]: 2008/09/12_16:13:33 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:33 debug: client STONITH_RA_EXEC_30290 [pid: 30290] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[30290]: 2008/09/12_16:13:33 debug: a stonith RA operation queue to run, call_id=30291.
stonithd[30291]: 2008/09/12_16:13:33 debug: external_status: called.
lrmd[30290]: 2008/09/12_16:13:33 debug: stonithd_receive_ops_result: begin
stonithd[30291]: 2008/09/12_16:13:33 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[30291]: 2008/09/12_16:13:33 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30291]: 2008/09/12_16:13:33 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:33 debug: Child process external_prmFencing:0_monitor [30291] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:33 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:33 debug: client STONITH_RA_EXEC_30290 (pid=30290) signed off
lrmd[30314]: 2008/09/12_16:13:38 debug: stonithd_signon: creating connection
lrmd[30314]: 2008/09/12_16:13:38 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:38 debug: client STONITH_RA_EXEC_30314 (pid=30314) succeeded to signon to stonithd.
lrmd[30314]: 2008/09/12_16:13:38 debug: signed on to stonithd.
lrmd[30314]: 2008/09/12_16:13:38 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:38 debug: client STONITH_RA_EXEC_30314 [pid: 30314] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30315]: 2008/09/12_16:13:38 debug: external_status: called.
stonithd[30315]: 2008/09/12_16:13:38 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30314]: 2008/09/12_16:13:38 debug: a stonith RA operation queue to run, call_id=30315.
lrmd[30314]: 2008/09/12_16:13:38 debug: stonithd_receive_ops_result: begin
stonithd[30315]: 2008/09/12_16:13:38 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30315]: 2008/09/12_16:13:38 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:38 debug: Child process external_prmFencing:0_monitor [30315] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:38 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:38 debug: client STONITH_RA_EXEC_30314 (pid=30314) signed off
lrmd[30341]: 2008/09/12_16:13:43 debug: stonithd_signon: creating connection
lrmd[30341]: 2008/09/12_16:13:43 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:43 debug: client STONITH_RA_EXEC_30341 (pid=30341) succeeded to signon to stonithd.
lrmd[30341]: 2008/09/12_16:13:43 debug: signed on to stonithd.
lrmd[30341]: 2008/09/12_16:13:43 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:43 debug: client STONITH_RA_EXEC_30341 [pid: 30341] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30342]: 2008/09/12_16:13:43 debug: external_status: called.
stonithd[30342]: 2008/09/12_16:13:43 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30341]: 2008/09/12_16:13:43 debug: a stonith RA operation queue to run, call_id=30342.
lrmd[30341]: 2008/09/12_16:13:43 debug: stonithd_receive_ops_result: begin
stonithd[30342]: 2008/09/12_16:13:43 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30342]: 2008/09/12_16:13:43 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:43 debug: Child process external_prmFencing:0_monitor [30342] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:43 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:43 debug: client STONITH_RA_EXEC_30341 (pid=30341) signed off
lrmd[30365]: 2008/09/12_16:13:48 debug: stonithd_signon: creating connection
lrmd[30365]: 2008/09/12_16:13:48 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:48 debug: client STONITH_RA_EXEC_30365 (pid=30365) succeeded to signon to stonithd.
lrmd[30365]: 2008/09/12_16:13:48 debug: signed on to stonithd.
lrmd[30365]: 2008/09/12_16:13:48 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:48 debug: client STONITH_RA_EXEC_30365 [pid: 30365] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30366]: 2008/09/12_16:13:48 debug: external_status: called.
stonithd[30366]: 2008/09/12_16:13:48 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30365]: 2008/09/12_16:13:48 debug: a stonith RA operation queue to run, call_id=30366.
lrmd[30365]: 2008/09/12_16:13:48 debug: stonithd_receive_ops_result: begin
stonithd[30366]: 2008/09/12_16:13:48 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30366]: 2008/09/12_16:13:48 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:48 debug: Child process external_prmFencing:0_monitor [30366] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:48 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:48 debug: client STONITH_RA_EXEC_30365 (pid=30365) signed off
lrmd[30392]: 2008/09/12_16:13:53 debug: stonithd_signon: creating connection
lrmd[30392]: 2008/09/12_16:13:53 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:53 debug: client STONITH_RA_EXEC_30392 (pid=30392) succeeded to signon to stonithd.
lrmd[30392]: 2008/09/12_16:13:53 debug: signed on to stonithd.
lrmd[30392]: 2008/09/12_16:13:53 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:53 debug: client STONITH_RA_EXEC_30392 [pid: 30392] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30393]: 2008/09/12_16:13:53 debug: external_status: called.
stonithd[30393]: 2008/09/12_16:13:53 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30392]: 2008/09/12_16:13:53 debug: a stonith RA operation queue to run, call_id=30393.
lrmd[30392]: 2008/09/12_16:13:53 debug: stonithd_receive_ops_result: begin
stonithd[30393]: 2008/09/12_16:13:53 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30393]: 2008/09/12_16:13:53 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:53 debug: Child process external_prmFencing:0_monitor [30393] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:53 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:53 debug: client STONITH_RA_EXEC_30392 (pid=30392) signed off
lrmd[30417]: 2008/09/12_16:13:58 debug: stonithd_signon: creating connection
lrmd[30417]: 2008/09/12_16:13:58 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:13:58 debug: client STONITH_RA_EXEC_30417 (pid=30417) succeeded to signon to stonithd.
lrmd[30417]: 2008/09/12_16:13:58 debug: signed on to stonithd.
lrmd[30417]: 2008/09/12_16:13:58 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:13:58 debug: client STONITH_RA_EXEC_30417 [pid: 30417] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30418]: 2008/09/12_16:13:58 debug: external_status: called.
stonithd[30418]: 2008/09/12_16:13:58 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30417]: 2008/09/12_16:13:58 debug: a stonith RA operation queue to run, call_id=30418.
lrmd[30417]: 2008/09/12_16:13:58 debug: stonithd_receive_ops_result: begin
stonithd[30418]: 2008/09/12_16:13:58 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30418]: 2008/09/12_16:13:58 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:13:58 debug: Child process external_prmFencing:0_monitor [30418] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:13:58 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:13:58 debug: client STONITH_RA_EXEC_30417 (pid=30417) signed off
lrmd[30441]: 2008/09/12_16:14:03 debug: stonithd_signon: creating connection
lrmd[30441]: 2008/09/12_16:14:03 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:03 debug: client STONITH_RA_EXEC_30441 (pid=30441) succeeded to signon to stonithd.
lrmd[30441]: 2008/09/12_16:14:03 debug: signed on to stonithd.
lrmd[30441]: 2008/09/12_16:14:03 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:03 debug: client STONITH_RA_EXEC_30441 [pid: 30441] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30442]: 2008/09/12_16:14:03 debug: external_status: called.
stonithd[30442]: 2008/09/12_16:14:03 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30441]: 2008/09/12_16:14:03 debug: a stonith RA operation queue to run, call_id=30442.
lrmd[30441]: 2008/09/12_16:14:03 debug: stonithd_receive_ops_result: begin
stonithd[30442]: 2008/09/12_16:14:04 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30442]: 2008/09/12_16:14:04 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:04 debug: Child process external_prmFencing:0_monitor [30442] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:04 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:04 debug: client STONITH_RA_EXEC_30441 (pid=30441) signed off
lrmd[30469]: 2008/09/12_16:14:09 debug: stonithd_signon: creating connection
lrmd[30469]: 2008/09/12_16:14:09 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:09 debug: client STONITH_RA_EXEC_30469 (pid=30469) succeeded to signon to stonithd.
lrmd[30469]: 2008/09/12_16:14:09 debug: signed on to stonithd.
lrmd[30469]: 2008/09/12_16:14:09 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:09 debug: client STONITH_RA_EXEC_30469 [pid: 30469] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30470]: 2008/09/12_16:14:09 debug: external_status: called.
stonithd[30470]: 2008/09/12_16:14:09 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30469]: 2008/09/12_16:14:09 debug: a stonith RA operation queue to run, call_id=30470.
lrmd[30469]: 2008/09/12_16:14:09 debug: stonithd_receive_ops_result: begin
stonithd[30470]: 2008/09/12_16:14:09 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30470]: 2008/09/12_16:14:09 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:09 debug: Child process external_prmFencing:0_monitor [30470] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:09 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:09 debug: client STONITH_RA_EXEC_30469 (pid=30469) signed off
lrmd[30493]: 2008/09/12_16:14:14 debug: stonithd_signon: creating connection
lrmd[30493]: 2008/09/12_16:14:14 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:14 debug: client STONITH_RA_EXEC_30493 (pid=30493) succeeded to signon to stonithd.
lrmd[30493]: 2008/09/12_16:14:14 debug: signed on to stonithd.
lrmd[30493]: 2008/09/12_16:14:14 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:14 debug: client STONITH_RA_EXEC_30493 [pid: 30493] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30494]: 2008/09/12_16:14:14 debug: external_status: called.
stonithd[30494]: 2008/09/12_16:14:14 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30493]: 2008/09/12_16:14:14 debug: a stonith RA operation queue to run, call_id=30494.
lrmd[30493]: 2008/09/12_16:14:14 debug: stonithd_receive_ops_result: begin
stonithd[30494]: 2008/09/12_16:14:14 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30494]: 2008/09/12_16:14:14 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:14 debug: Child process external_prmFencing:0_monitor [30494] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:14 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:14 debug: client STONITH_RA_EXEC_30493 (pid=30493) signed off
lrmd[30520]: 2008/09/12_16:14:19 debug: stonithd_signon: creating connection
lrmd[30520]: 2008/09/12_16:14:19 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:19 debug: client STONITH_RA_EXEC_30520 (pid=30520) succeeded to signon to stonithd.
lrmd[30520]: 2008/09/12_16:14:19 debug: signed on to stonithd.
lrmd[30520]: 2008/09/12_16:14:19 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:19 debug: client STONITH_RA_EXEC_30520 [pid: 30520] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30521]: 2008/09/12_16:14:19 debug: external_status: called.
stonithd[30521]: 2008/09/12_16:14:19 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30520]: 2008/09/12_16:14:19 debug: a stonith RA operation queue to run, call_id=30521.
lrmd[30520]: 2008/09/12_16:14:19 debug: stonithd_receive_ops_result: begin
stonithd[30521]: 2008/09/12_16:14:19 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30521]: 2008/09/12_16:14:19 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:19 debug: Child process external_prmFencing:0_monitor [30521] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:19 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:19 debug: client STONITH_RA_EXEC_30520 (pid=30520) signed off
crmd[30009]: 2008/09/12_16:14:20 debug: te_update_diff: Processing diff (cib_modify): 0.7.15 -> 0.7.16 (S_IDLE)
crmd[30009]: 2008/09/12_16:14:20 info: process_graph_event: Action dummy_monitor_10000 arrived after a completed transition
crmd[30009]: 2008/09/12_16:14:20 info: abort_transition_graph: process_graph_event:550 - Triggered graph processing (complete=1) : Inactive graph
crmd[30009]: 2008/09/12_16:14:20 debug: log_data_element: Dumping lrm_rsc_op
crmd[30009]: 2008/09/12_16:14:20 info: log_data_element: abort_transition_graph: Cause <lrm_rsc_op transition-magic="0:7;10:0:0:c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0" rc-code="7" last-run="1221203658" last-rc-change="1221203658" exec-time="70" id="dummy_monitor_10000" />
crmd[30009]: 2008/09/12_16:14:20 WARN: update_failcount: Updating failcount for dummy on d0cced4b-51d6-4456-9a82-1a01cd8e6cc3 after failed monitor: rc=7 (update=value++, time=1221203660)
cib[30005]: 2008/09/12_16:14:20 debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='d0cced4b-51d6-4456-9a82-1a01cd8e6cc3']//nvpair[@name='fail-count-dummy'] does not exist
cib[30005]: 2008/09/12_16:14:20 info: cib_process_xpath: Processing cib_query op for /cib (/cib)
cib[30005]: 2008/09/12_16:14:20 debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='d0cced4b-51d6-4456-9a82-1a01cd8e6cc3']//nvpair[@name='last-failure-dummy'] does not exist
cib[30005]: 2008/09/12_16:14:20 info: cib_process_xpath: Processing cib_query op for /cib (/cib)
crmd[30009]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration
crmd[30009]: 2008/09/12_16:14:20 debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
crmd[30009]: 2008/09/12_16:14:20 info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
crmd[30009]: 2008/09/12_16:14:20 info: do_state_transition: All 2 cluster nodes are eligible to run resources.
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
crmd[30009]: 2008/09/12_16:14:20 debug: do_pe_invoke: Requesting the current CIB: S_POLICY_ENGINE
crmd[30009]: 2008/09/12_16:14:20 debug: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1221203660-28, seq=2, quorate=1
pengine[30168]: 2008/09/12_16:14:20 WARN: process_pe_message: Your current configuration only conforms to transitional-0.6
pengine[30168]: 2008/09/12_16:14:20 WARN: process_pe_message: Please use 'cibadmin --upgrade' to convert to pacemaker-0.7
pengine[30168]: 2008/09/12_16:14:20 debug: update_validation: Testing 'transitional-0.6' validation
pengine[30168]: 2008/09/12_16:14:20 notice: update_validation: Upgrading transitional-0.6-style configuration to pacemaker-0.7 with /usr/share/pacemaker/upgrade06.xsl
pengine[30168]: 2008/09/12_16:14:20 info: update_validation: Transformation /usr/share/pacemaker/upgrade06.xsl successful
pengine[30168]: 2008/09/12_16:14:20 notice: update_validation: Upgraded from transitional-0.6 to pacemaker-0.7 validation
pengine[30168]: 2008/09/12_16:14:20 WARN: process_pe_message: Your configuration was internally updated to pacemaker-0.7
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: Default action timeout: 120s
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: Default stickiness: 1000000
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: Stop all active resources: false
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: Default failure timeout: 0
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: Default migration threshold: 0
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: STONITH of failed nodes is enabled
pengine[30168]: 2008/09/12_16:14:20 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
pengine[30168]: 2008/09/12_16:14:20 notice: unpack_config: On loss of CCM Quorum: Ignore
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:14:20 info: determine_online_status: Node node02 is online
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 info: determine_online_status: Node node01 is online
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 info: unpack_rsc_op: Remapping dummy_monitor_10000 (rc=7) on node01 to an ERROR (expected 0)
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 WARN: unpack_rsc_op: Processing failed op dummy_monitor_10000 on node01: Error
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 notice: native_print: dummy	(ocf::heartbeat:Dummy):	Started node01 FAILED
pengine[30168]: 2008/09/12_16:14:20 notice: clone_print: Clone Set: clnFencing
pengine[30168]: 2008/09/12_16:14:20 notice: native_print:     prmFencing:0	(stonith:external/sshTEST):	Started node02
pengine[30168]: 2008/09/12_16:14:20 notice: native_print:     prmFencing:1	(stonith:external/sshTEST):	Started node01
pengine[30168]: 2008/09/12_16:14:20 debug: common_apply_stickiness: Resource dummy: preferring current location (node=node01, weight=1000000)
pengine[30168]: 2008/09/12_16:14:20 info: get_failcount: dummy has failed 1 times on node01
pengine[30168]: 2008/09/12_16:14:20 debug: common_apply_stickiness: Resource prmFencing:1: preferring current location (node=node01, weight=1)
pengine[30168]: 2008/09/12_16:14:20 debug: common_apply_stickiness: Resource prmFencing:0: preferring current location (node=node02, weight=1)
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: native_assign_node: Assigning node02 to dummy
pengine[30168]: 2008/09/12_16:14:20 debug: native_assign_node: Assigning node02 to prmFencing:0
pengine[30168]: 2008/09/12_16:14:20 debug: native_assign_node: All nodes for resource prmFencing:1 are unavailable, unclean or shutting down (node01: 0, -1000000)
pengine[30168]: 2008/09/12_16:14:20 WARN: native_color: Resource prmFencing:1 cannot run anywhere
pengine[30168]: 2008/09/12_16:14:20 debug: clone_color: Allocated 1 clnFencing instances of a possible 2
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 notice: NoRoleChange: Leave resource dummy	(Started node02)
pengine[30168]: 2008/09/12_16:14:20 notice: StopRsc:   node01	Stop dummy
pengine[30168]: 2008/09/12_16:14:20 notice: StartRsc:  node02	Start dummy
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 notice: RecurringOp:  Start recurring monitor (10s) for dummy on node02
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 notice: NoRoleChange: Leave resource prmFencing:0	(Started node02)
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 notice: NoRoleChange: Stop resource prmFencing:1	(Started node01)
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 notice: StopRsc:   node01	Stop prmFencing:1
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 WARN: stage6: Scheduling Node node01 for STONITH
pengine[30168]: 2008/09/12_16:14:20 WARN: native_stop_constraints: Stop of failed resource dummy is implicit after node01 is fenced
pengine[30168]: 2008/09/12_16:14:20 info: native_stop_constraints: prmFencing:1_stop_0 is implicit after node01 is fenced
pengine[30168]: 2008/09/12_16:14:20 notice: NoRoleChange: Leave resource prmFencing:0	(Started node02)
pengine[30168]: 2008/09/12_16:14:20 notice: NoRoleChange: Stop resource prmFencing:1	(Started node01)
pengine[30168]: 2008/09/12_16:14:20 notice: StopRsc:   node01	Stop prmFencing:1
pengine[30168]: 2008/09/12_16:14:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:14:20 debug: get_last_sequence: Series file /var/lib/heartbeat/pengine/pe-warn.last does not exist
crmd[30009]: 2008/09/12_16:14:20 debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_LOG   
crmd[30009]: 2008/09/12_16:14:20 info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:14:20 debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
crmd[30009]: 2008/09/12_16:14:20 info: unpack_graph: Unpacked transition 1: 9 actions in 9 synapses
crmd[30009]: 2008/09/12_16:14:20 info: do_te_invoke: Processing graph 1 derived from /var/lib/heartbeat/pengine/pe-warn-0.bz2
crmd[30009]: 2008/09/12_16:14:20 debug: start_global_timer: Starting abort timer: 60000ms
crmd[30009]: 2008/09/12_16:14:20 debug: initiate_action: Action 15: Increasing IDLE timer to 240000
crmd[30009]: 2008/09/12_16:14:20 info: te_pseudo_action: Pseudo action 15 fired and confirmed
crmd[30009]: 2008/09/12_16:14:20 info: te_pseudo_action: Pseudo action 17 fired and confirmed
crmd[30009]: 2008/09/12_16:14:20 info: te_fence_node: Executing reboot fencing operation (18) on node01 (timeout=120000)
crmd[30009]: 2008/09/12_16:14:20 debug: te_connect_stonith: Still connected
crmd[30009]: 2008/09/12_16:14:20 debug: waiting for the stonith reply msg.
stonithd[30007]: 2008/09/12_16:14:20 info: client tengine [pid: 30009] requests a STONITH operation RESET on node node01
stonithd[30535]: 2008/09/12_16:14:20 debug: external_reset_req: called.
stonithd[30535]: 2008/09/12_16:14:20 debug: Host external-reset initiating on node01
stonithd[30535]: 2008/09/12_16:14:20 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST reset node01'
stonithd[30007]: 2008/09/12_16:14:20 info: stonith_operate_locally::2591: sending fencing op RESET for node01 to prmFencing:0 (external/sshTEST) (pid=30535)
stonithd[30007]: 2008/09/12_16:14:20 debug: inserted optype=RESET, key=30535
stonithd[30007]: 2008/09/12_16:14:20 debug: stonithd_node_fence: sent back a synchronous reply.
crmd[30009]: 2008/09/12_16:14:20 debug: stonithd_node_fence:563: Stonithd's synchronous answer is ST_APIOK
crmd[30009]: 2008/09/12_16:14:20 debug: run_graph: Transition 1: (Complete=0, Pending=1, Fired=3, Skipped=0, Incomplete=6)
crmd[30009]: 2008/09/12_16:14:20 debug: start_global_timer: Starting abort timer: 240000ms
crmd[30009]: 2008/09/12_16:14:20 info: te_pseudo_action: Pseudo action 12 fired and confirmed
crmd[30009]: 2008/09/12_16:14:20 info: te_pseudo_action: Pseudo action 16 fired and confirmed
crmd[30009]: 2008/09/12_16:14:20 debug: run_graph: Transition 1: (Complete=2, Pending=1, Fired=2, Skipped=0, Incomplete=4)
crmd[30009]: 2008/09/12_16:14:20 debug: start_global_timer: Starting abort timer: 240000ms
crmd[30009]: 2008/09/12_16:14:20 debug: run_graph: Transition 1: (Complete=4, Pending=1, Fired=0, Skipped=0, Incomplete=4)
pengine[30168]: 2008/09/12_16:14:20 WARN: process_pe_message: Transition 1: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/heartbeat/pengine/pe-warn-0.bz2
pengine[30168]: 2008/09/12_16:14:20 info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
lrmd[30559]: 2008/09/12_16:14:24 debug: stonithd_signon: creating connection
lrmd[30559]: 2008/09/12_16:14:24 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:24 debug: client STONITH_RA_EXEC_30559 (pid=30559) succeeded to signon to stonithd.
lrmd[30559]: 2008/09/12_16:14:24 debug: signed on to stonithd.
lrmd[30559]: 2008/09/12_16:14:24 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:24 debug: client STONITH_RA_EXEC_30559 [pid: 30559] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30560]: 2008/09/12_16:14:24 debug: external_status: called.
stonithd[30560]: 2008/09/12_16:14:24 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30559]: 2008/09/12_16:14:24 debug: a stonith RA operation queue to run, call_id=30560.
lrmd[30559]: 2008/09/12_16:14:24 debug: stonithd_receive_ops_result: begin
stonithd[30560]: 2008/09/12_16:14:24 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30560]: 2008/09/12_16:14:24 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:24 debug: Child process external_prmFencing:0_monitor [30560] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:24 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:24 debug: client STONITH_RA_EXEC_30559 (pid=30559) signed off
lrmd[30596]: 2008/09/12_16:14:29 debug: stonithd_signon: creating connection
lrmd[30596]: 2008/09/12_16:14:29 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:29 debug: client STONITH_RA_EXEC_30596 (pid=30596) succeeded to signon to stonithd.
lrmd[30596]: 2008/09/12_16:14:29 debug: signed on to stonithd.
lrmd[30596]: 2008/09/12_16:14:29 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:29 debug: client STONITH_RA_EXEC_30596 [pid: 30596] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30597]: 2008/09/12_16:14:29 debug: external_status: called.
stonithd[30597]: 2008/09/12_16:14:29 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30596]: 2008/09/12_16:14:29 debug: a stonith RA operation queue to run, call_id=30597.
lrmd[30596]: 2008/09/12_16:14:29 debug: stonithd_receive_ops_result: begin
stonithd[30597]: 2008/09/12_16:14:29 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30597]: 2008/09/12_16:14:29 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:29 debug: Child process external_prmFencing:0_monitor [30597] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:29 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:29 debug: client STONITH_RA_EXEC_30596 (pid=30596) signed off
lrmd[30630]: 2008/09/12_16:14:34 debug: stonithd_signon: creating connection
lrmd[30630]: 2008/09/12_16:14:34 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:34 debug: client STONITH_RA_EXEC_30630 (pid=30630) succeeded to signon to stonithd.
lrmd[30630]: 2008/09/12_16:14:34 debug: signed on to stonithd.
lrmd[30630]: 2008/09/12_16:14:34 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:34 debug: client STONITH_RA_EXEC_30630 [pid: 30630] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30631]: 2008/09/12_16:14:34 debug: external_status: called.
stonithd[30631]: 2008/09/12_16:14:34 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30630]: 2008/09/12_16:14:34 debug: a stonith RA operation queue to run, call_id=30631.
lrmd[30630]: 2008/09/12_16:14:34 debug: stonithd_receive_ops_result: begin
stonithd[30631]: 2008/09/12_16:14:34 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30631]: 2008/09/12_16:14:34 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:34 debug: Child process external_prmFencing:0_monitor [30631] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:34 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:35 debug: client STONITH_RA_EXEC_30630 (pid=30630) signed off
stonithd[30535]: 2008/09/12_16:14:37 info: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST reset node01' returned 256
stonithd[30535]: 2008/09/12_16:14:37 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST reset node01' output: 
stonithd[30535]: 2008/09/12_16:14:37 debug: external_reset_req: running 'sshTEST reset' returned 256
stonithd[30535]: 2008/09/12_16:14:37 CRIT: external_reset_req: 'sshTEST reset' for host node01 failed with rc 256
stonithd[30007]: 2008/09/12_16:14:37 debug: Child process external_prmFencing:0_1 [30535] exited, its exit code: 5 when signo=0.
stonithd[30007]: 2008/09/12_16:14:37 info: failed to STONITH node node01 with local device prmFencing:0 (exitcode 5), gonna try the next local device
stonithd[30007]: 2008/09/12_16:14:37 debug: get_local_stonithobj_can_stonith: begin_rsc_id donnot exist.
stonithd[30007]: 2008/09/12_16:14:37 debug: failed to STONITH node node01 locally
stonithd[30007]: 2008/09/12_16:14:37 debug: Will ask other nodes to help STONITH node node01.
stonithd[30007]: 2008/09/12_16:14:37 info: we can't manage node01, broadcast request to other nodes
stonithd[30007]: 2008/09/12_16:14:37 debug: changeto_remote_stonithop: removed optype=RESET, key=30535
stonithd[30007]: 2008/09/12_16:14:37 debug: changeto_remote_stonithop: inserted optype=RESET, key=30535
stonithd[30007]: 2008/09/12_16:14:37 debug: handle_msg_tstit: got T_STIT msg.
stonithd[30007]: 2008/09/12_16:14:37 debug: received a T_STITmsg from myself, ignoring
lrmd[30660]: 2008/09/12_16:14:40 debug: stonithd_signon: creating connection
lrmd[30660]: 2008/09/12_16:14:40 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:40 debug: client STONITH_RA_EXEC_30660 (pid=30660) succeeded to signon to stonithd.
lrmd[30660]: 2008/09/12_16:14:40 debug: signed on to stonithd.
lrmd[30660]: 2008/09/12_16:14:40 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:40 debug: client STONITH_RA_EXEC_30660 [pid: 30660] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30661]: 2008/09/12_16:14:40 debug: external_status: called.
stonithd[30661]: 2008/09/12_16:14:40 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30660]: 2008/09/12_16:14:40 debug: a stonith RA operation queue to run, call_id=30661.
lrmd[30660]: 2008/09/12_16:14:40 debug: stonithd_receive_ops_result: begin
stonithd[30661]: 2008/09/12_16:14:40 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30661]: 2008/09/12_16:14:40 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:40 debug: Child process external_prmFencing:0_monitor [30661] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:40 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:40 debug: client STONITH_RA_EXEC_30660 (pid=30660) signed off
lrmd[30684]: 2008/09/12_16:14:45 debug: stonithd_signon: creating connection
lrmd[30684]: 2008/09/12_16:14:45 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:45 debug: client STONITH_RA_EXEC_30684 (pid=30684) succeeded to signon to stonithd.
lrmd[30684]: 2008/09/12_16:14:45 debug: signed on to stonithd.
lrmd[30684]: 2008/09/12_16:14:45 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:45 debug: client STONITH_RA_EXEC_30684 [pid: 30684] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30685]: 2008/09/12_16:14:45 debug: external_status: called.
stonithd[30685]: 2008/09/12_16:14:45 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30684]: 2008/09/12_16:14:45 debug: a stonith RA operation queue to run, call_id=30685.
lrmd[30684]: 2008/09/12_16:14:45 debug: stonithd_receive_ops_result: begin
stonithd[30685]: 2008/09/12_16:14:45 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30685]: 2008/09/12_16:14:45 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:45 debug: Child process external_prmFencing:0_monitor [30685] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:45 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:45 debug: client STONITH_RA_EXEC_30684 (pid=30684) signed off
lrmd[30711]: 2008/09/12_16:14:50 debug: stonithd_signon: creating connection
lrmd[30711]: 2008/09/12_16:14:50 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:50 debug: client STONITH_RA_EXEC_30711 (pid=30711) succeeded to signon to stonithd.
lrmd[30711]: 2008/09/12_16:14:50 debug: signed on to stonithd.
lrmd[30711]: 2008/09/12_16:14:50 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:50 debug: client STONITH_RA_EXEC_30711 [pid: 30711] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30712]: 2008/09/12_16:14:50 debug: external_status: called.
stonithd[30712]: 2008/09/12_16:14:50 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30711]: 2008/09/12_16:14:50 debug: a stonith RA operation queue to run, call_id=30712.
lrmd[30711]: 2008/09/12_16:14:50 debug: stonithd_receive_ops_result: begin
stonithd[30712]: 2008/09/12_16:14:50 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30712]: 2008/09/12_16:14:50 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:50 debug: Child process external_prmFencing:0_monitor [30712] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:50 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:50 debug: client STONITH_RA_EXEC_30711 (pid=30711) signed off
lrmd[30735]: 2008/09/12_16:14:55 debug: stonithd_signon: creating connection
lrmd[30735]: 2008/09/12_16:14:55 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:14:55 debug: client STONITH_RA_EXEC_30735 (pid=30735) succeeded to signon to stonithd.
lrmd[30735]: 2008/09/12_16:14:55 debug: signed on to stonithd.
lrmd[30735]: 2008/09/12_16:14:55 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:14:55 debug: client STONITH_RA_EXEC_30735 [pid: 30735] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30736]: 2008/09/12_16:14:55 debug: external_status: called.
stonithd[30736]: 2008/09/12_16:14:55 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30735]: 2008/09/12_16:14:55 debug: a stonith RA operation queue to run, call_id=30736.
lrmd[30735]: 2008/09/12_16:14:55 debug: stonithd_receive_ops_result: begin
stonithd[30736]: 2008/09/12_16:14:55 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30736]: 2008/09/12_16:14:55 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:14:55 debug: Child process external_prmFencing:0_monitor [30736] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:14:55 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:14:55 debug: client STONITH_RA_EXEC_30735 (pid=30735) signed off
lrmd[30762]: 2008/09/12_16:15:00 debug: stonithd_signon: creating connection
lrmd[30762]: 2008/09/12_16:15:00 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:00 debug: client STONITH_RA_EXEC_30762 (pid=30762) succeeded to signon to stonithd.
lrmd[30762]: 2008/09/12_16:15:00 debug: signed on to stonithd.
lrmd[30762]: 2008/09/12_16:15:00 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:00 debug: client STONITH_RA_EXEC_30762 [pid: 30762] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30763]: 2008/09/12_16:15:00 debug: external_status: called.
stonithd[30763]: 2008/09/12_16:15:00 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30762]: 2008/09/12_16:15:00 debug: a stonith RA operation queue to run, call_id=30763.
lrmd[30762]: 2008/09/12_16:15:00 debug: stonithd_receive_ops_result: begin
stonithd[30763]: 2008/09/12_16:15:00 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30763]: 2008/09/12_16:15:00 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:00 debug: Child process external_prmFencing:0_monitor [30763] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:00 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:00 debug: client STONITH_RA_EXEC_30762 (pid=30762) signed off
lrmd[30787]: 2008/09/12_16:15:05 debug: stonithd_signon: creating connection
lrmd[30787]: 2008/09/12_16:15:05 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:05 debug: client STONITH_RA_EXEC_30787 (pid=30787) succeeded to signon to stonithd.
lrmd[30787]: 2008/09/12_16:15:05 debug: signed on to stonithd.
lrmd[30787]: 2008/09/12_16:15:05 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:05 debug: client STONITH_RA_EXEC_30787 [pid: 30787] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30788]: 2008/09/12_16:15:05 debug: external_status: called.
stonithd[30788]: 2008/09/12_16:15:05 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30787]: 2008/09/12_16:15:05 debug: a stonith RA operation queue to run, call_id=30788.
lrmd[30787]: 2008/09/12_16:15:05 debug: stonithd_receive_ops_result: begin
stonithd[30788]: 2008/09/12_16:15:05 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30788]: 2008/09/12_16:15:05 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:05 debug: Child process external_prmFencing:0_monitor [30788] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:05 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:05 debug: client STONITH_RA_EXEC_30787 (pid=30787) signed off
lrmd[30814]: 2008/09/12_16:15:10 debug: stonithd_signon: creating connection
lrmd[30814]: 2008/09/12_16:15:10 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:10 debug: client STONITH_RA_EXEC_30814 (pid=30814) succeeded to signon to stonithd.
lrmd[30814]: 2008/09/12_16:15:10 debug: signed on to stonithd.
lrmd[30814]: 2008/09/12_16:15:10 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:10 debug: client STONITH_RA_EXEC_30814 [pid: 30814] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30815]: 2008/09/12_16:15:10 debug: external_status: called.
stonithd[30815]: 2008/09/12_16:15:10 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30814]: 2008/09/12_16:15:10 debug: a stonith RA operation queue to run, call_id=30815.
lrmd[30814]: 2008/09/12_16:15:10 debug: stonithd_receive_ops_result: begin
stonithd[30815]: 2008/09/12_16:15:10 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30815]: 2008/09/12_16:15:10 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:10 debug: Child process external_prmFencing:0_monitor [30815] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:10 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:10 debug: client STONITH_RA_EXEC_30814 (pid=30814) signed off
lrmd[30838]: 2008/09/12_16:15:15 debug: stonithd_signon: creating connection
lrmd[30838]: 2008/09/12_16:15:15 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:15 debug: client STONITH_RA_EXEC_30838 (pid=30838) succeeded to signon to stonithd.
lrmd[30838]: 2008/09/12_16:15:15 debug: signed on to stonithd.
lrmd[30838]: 2008/09/12_16:15:15 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:15 debug: client STONITH_RA_EXEC_30838 [pid: 30838] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30839]: 2008/09/12_16:15:15 debug: external_status: called.
stonithd[30839]: 2008/09/12_16:15:15 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30838]: 2008/09/12_16:15:15 debug: a stonith RA operation queue to run, call_id=30839.
lrmd[30838]: 2008/09/12_16:15:15 debug: stonithd_receive_ops_result: begin
stonithd[30839]: 2008/09/12_16:15:15 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30839]: 2008/09/12_16:15:15 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:15 debug: Child process external_prmFencing:0_monitor [30839] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:15 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:15 debug: client STONITH_RA_EXEC_30838 (pid=30838) signed off
lrmd[30866]: 2008/09/12_16:15:20 debug: stonithd_signon: creating connection
lrmd[30866]: 2008/09/12_16:15:20 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:20 debug: client STONITH_RA_EXEC_30866 (pid=30866) succeeded to signon to stonithd.
lrmd[30866]: 2008/09/12_16:15:20 debug: signed on to stonithd.
lrmd[30866]: 2008/09/12_16:15:20 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:20 debug: client STONITH_RA_EXEC_30866 [pid: 30866] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[30866]: 2008/09/12_16:15:20 debug: a stonith RA operation queue to run, call_id=30867.
stonithd[30867]: 2008/09/12_16:15:20 debug: external_status: called.
lrmd[30866]: 2008/09/12_16:15:20 debug: stonithd_receive_ops_result: begin
stonithd[30867]: 2008/09/12_16:15:20 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[30867]: 2008/09/12_16:15:20 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30867]: 2008/09/12_16:15:20 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:20 debug: Child process external_prmFencing:0_monitor [30867] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:20 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:20 debug: client STONITH_RA_EXEC_30866 (pid=30866) signed off
lrmd[30890]: 2008/09/12_16:15:25 debug: stonithd_signon: creating connection
lrmd[30890]: 2008/09/12_16:15:25 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:25 debug: client STONITH_RA_EXEC_30890 (pid=30890) succeeded to signon to stonithd.
lrmd[30890]: 2008/09/12_16:15:25 debug: signed on to stonithd.
stonithd[30007]: 2008/09/12_16:15:25 debug: client STONITH_RA_EXEC_30890 [pid: 30890] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[30890]: 2008/09/12_16:15:25 debug: waiting for the stonithRA reply msg.
stonithd[30891]: 2008/09/12_16:15:25 debug: external_status: called.
lrmd[30890]: 2008/09/12_16:15:25 debug: a stonith RA operation queue to run, call_id=30891.
stonithd[30891]: 2008/09/12_16:15:25 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30890]: 2008/09/12_16:15:25 debug: stonithd_receive_ops_result: begin
stonithd[30891]: 2008/09/12_16:15:25 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30891]: 2008/09/12_16:15:25 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:25 debug: Child process external_prmFencing:0_monitor [30891] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:25 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:25 debug: client STONITH_RA_EXEC_30890 (pid=30890) signed off
lrmd[30916]: 2008/09/12_16:15:30 debug: stonithd_signon: creating connection
lrmd[30916]: 2008/09/12_16:15:30 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:30 debug: client STONITH_RA_EXEC_30916 (pid=30916) succeeded to signon to stonithd.
lrmd[30916]: 2008/09/12_16:15:30 debug: signed on to stonithd.
lrmd[30916]: 2008/09/12_16:15:30 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:30 debug: client STONITH_RA_EXEC_30916 [pid: 30916] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30917]: 2008/09/12_16:15:30 debug: external_status: called.
stonithd[30917]: 2008/09/12_16:15:30 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30916]: 2008/09/12_16:15:30 debug: a stonith RA operation queue to run, call_id=30917.
lrmd[30916]: 2008/09/12_16:15:30 debug: stonithd_receive_ops_result: begin
stonithd[30917]: 2008/09/12_16:15:31 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30917]: 2008/09/12_16:15:31 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:31 debug: Child process external_prmFencing:0_monitor [30917] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:31 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:31 debug: client STONITH_RA_EXEC_30916 (pid=30916) signed off
lrmd[30941]: 2008/09/12_16:15:36 debug: stonithd_signon: creating connection
lrmd[30941]: 2008/09/12_16:15:36 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:36 debug: client STONITH_RA_EXEC_30941 (pid=30941) succeeded to signon to stonithd.
lrmd[30941]: 2008/09/12_16:15:36 debug: signed on to stonithd.
lrmd[30941]: 2008/09/12_16:15:36 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:36 debug: client STONITH_RA_EXEC_30941 [pid: 30941] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30942]: 2008/09/12_16:15:36 debug: external_status: called.
stonithd[30942]: 2008/09/12_16:15:36 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30941]: 2008/09/12_16:15:36 debug: a stonith RA operation queue to run, call_id=30942.
lrmd[30941]: 2008/09/12_16:15:36 debug: stonithd_receive_ops_result: begin
stonithd[30942]: 2008/09/12_16:15:36 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30942]: 2008/09/12_16:15:36 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:36 debug: Child process external_prmFencing:0_monitor [30942] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:36 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:36 debug: client STONITH_RA_EXEC_30941 (pid=30941) signed off
lrmd[30965]: 2008/09/12_16:15:41 debug: stonithd_signon: creating connection
lrmd[30965]: 2008/09/12_16:15:41 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:41 debug: client STONITH_RA_EXEC_30965 (pid=30965) succeeded to signon to stonithd.
lrmd[30965]: 2008/09/12_16:15:41 debug: signed on to stonithd.
stonithd[30007]: 2008/09/12_16:15:41 debug: client STONITH_RA_EXEC_30965 [pid: 30965] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30966]: 2008/09/12_16:15:41 debug: external_status: called.
stonithd[30966]: 2008/09/12_16:15:41 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30965]: 2008/09/12_16:15:41 debug: waiting for the stonithRA reply msg.
lrmd[30965]: 2008/09/12_16:15:41 debug: a stonith RA operation queue to run, call_id=30966.
lrmd[30965]: 2008/09/12_16:15:41 debug: stonithd_receive_ops_result: begin
stonithd[30966]: 2008/09/12_16:15:41 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30966]: 2008/09/12_16:15:41 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:41 debug: Child process external_prmFencing:0_monitor [30966] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:41 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:41 debug: client STONITH_RA_EXEC_30965 (pid=30965) signed off
lrmd[30992]: 2008/09/12_16:15:46 debug: stonithd_signon: creating connection
lrmd[30992]: 2008/09/12_16:15:46 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:46 debug: client STONITH_RA_EXEC_30992 (pid=30992) succeeded to signon to stonithd.
lrmd[30992]: 2008/09/12_16:15:46 debug: signed on to stonithd.
lrmd[30992]: 2008/09/12_16:15:46 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:46 debug: client STONITH_RA_EXEC_30992 [pid: 30992] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[30993]: 2008/09/12_16:15:46 debug: external_status: called.
stonithd[30993]: 2008/09/12_16:15:46 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[30992]: 2008/09/12_16:15:46 debug: a stonith RA operation queue to run, call_id=30993.
lrmd[30992]: 2008/09/12_16:15:46 debug: stonithd_receive_ops_result: begin
stonithd[30993]: 2008/09/12_16:15:46 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[30993]: 2008/09/12_16:15:46 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:46 debug: Child process external_prmFencing:0_monitor [30993] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:46 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:46 debug: client STONITH_RA_EXEC_30992 (pid=30992) signed off
lrmd[31016]: 2008/09/12_16:15:51 debug: stonithd_signon: creating connection
lrmd[31016]: 2008/09/12_16:15:51 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:51 debug: client STONITH_RA_EXEC_31016 (pid=31016) succeeded to signon to stonithd.
lrmd[31016]: 2008/09/12_16:15:51 debug: signed on to stonithd.
lrmd[31016]: 2008/09/12_16:15:51 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:51 debug: client STONITH_RA_EXEC_31016 [pid: 31016] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31017]: 2008/09/12_16:15:51 debug: external_status: called.
stonithd[31017]: 2008/09/12_16:15:51 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31016]: 2008/09/12_16:15:51 debug: a stonith RA operation queue to run, call_id=31017.
lrmd[31016]: 2008/09/12_16:15:51 debug: stonithd_receive_ops_result: begin
stonithd[31017]: 2008/09/12_16:15:51 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31017]: 2008/09/12_16:15:51 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:51 debug: Child process external_prmFencing:0_monitor [31017] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:51 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:51 debug: client STONITH_RA_EXEC_31016 (pid=31016) signed off
lrmd[31043]: 2008/09/12_16:15:56 debug: stonithd_signon: creating connection
lrmd[31043]: 2008/09/12_16:15:56 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:15:56 debug: client STONITH_RA_EXEC_31043 (pid=31043) succeeded to signon to stonithd.
lrmd[31043]: 2008/09/12_16:15:56 debug: signed on to stonithd.
lrmd[31043]: 2008/09/12_16:15:56 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:15:56 debug: client STONITH_RA_EXEC_31043 [pid: 31043] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31044]: 2008/09/12_16:15:56 debug: external_status: called.
stonithd[31044]: 2008/09/12_16:15:56 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31043]: 2008/09/12_16:15:56 debug: a stonith RA operation queue to run, call_id=31044.
lrmd[31043]: 2008/09/12_16:15:56 debug: stonithd_receive_ops_result: begin
stonithd[31044]: 2008/09/12_16:15:56 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31044]: 2008/09/12_16:15:56 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:15:56 debug: Child process external_prmFencing:0_monitor [31044] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:15:56 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:15:56 debug: client STONITH_RA_EXEC_31043 (pid=31043) signed off
lrmd[31067]: 2008/09/12_16:16:01 debug: stonithd_signon: creating connection
lrmd[31067]: 2008/09/12_16:16:01 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:01 debug: client STONITH_RA_EXEC_31067 (pid=31067) succeeded to signon to stonithd.
lrmd[31067]: 2008/09/12_16:16:01 debug: signed on to stonithd.
lrmd[31067]: 2008/09/12_16:16:01 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:01 debug: client STONITH_RA_EXEC_31067 [pid: 31067] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31068]: 2008/09/12_16:16:01 debug: external_status: called.
stonithd[31068]: 2008/09/12_16:16:01 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31067]: 2008/09/12_16:16:01 debug: a stonith RA operation queue to run, call_id=31068.
lrmd[31067]: 2008/09/12_16:16:01 debug: stonithd_receive_ops_result: begin
stonithd[31068]: 2008/09/12_16:16:01 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31068]: 2008/09/12_16:16:01 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:01 debug: Child process external_prmFencing:0_monitor [31068] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:01 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:01 debug: client STONITH_RA_EXEC_31067 (pid=31067) signed off
lrmd[31094]: 2008/09/12_16:16:06 debug: stonithd_signon: creating connection
lrmd[31094]: 2008/09/12_16:16:06 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:06 debug: client STONITH_RA_EXEC_31094 (pid=31094) succeeded to signon to stonithd.
lrmd[31094]: 2008/09/12_16:16:06 debug: signed on to stonithd.
stonithd[30007]: 2008/09/12_16:16:06 debug: client STONITH_RA_EXEC_31094 [pid: 31094] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[31094]: 2008/09/12_16:16:06 debug: waiting for the stonithRA reply msg.
stonithd[31095]: 2008/09/12_16:16:06 debug: external_status: called.
lrmd[31094]: 2008/09/12_16:16:06 debug: a stonith RA operation queue to run, call_id=31095.
stonithd[31095]: 2008/09/12_16:16:06 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31094]: 2008/09/12_16:16:06 debug: stonithd_receive_ops_result: begin
stonithd[31095]: 2008/09/12_16:16:07 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31095]: 2008/09/12_16:16:07 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:07 debug: Child process external_prmFencing:0_monitor [31095] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:07 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:07 debug: client STONITH_RA_EXEC_31094 (pid=31094) signed off
lrmd[31118]: 2008/09/12_16:16:12 debug: stonithd_signon: creating connection
lrmd[31118]: 2008/09/12_16:16:12 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:12 debug: client STONITH_RA_EXEC_31118 (pid=31118) succeeded to signon to stonithd.
lrmd[31118]: 2008/09/12_16:16:12 debug: signed on to stonithd.
lrmd[31118]: 2008/09/12_16:16:12 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:12 debug: client STONITH_RA_EXEC_31118 [pid: 31118] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[31118]: 2008/09/12_16:16:12 debug: a stonith RA operation queue to run, call_id=31119.
stonithd[31119]: 2008/09/12_16:16:12 debug: external_status: called.
lrmd[31118]: 2008/09/12_16:16:12 debug: stonithd_receive_ops_result: begin
stonithd[31119]: 2008/09/12_16:16:12 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[31119]: 2008/09/12_16:16:12 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31119]: 2008/09/12_16:16:12 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:12 debug: Child process external_prmFencing:0_monitor [31119] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:12 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:12 debug: client STONITH_RA_EXEC_31118 (pid=31118) signed off
lrmd[31145]: 2008/09/12_16:16:17 debug: stonithd_signon: creating connection
lrmd[31145]: 2008/09/12_16:16:17 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:17 debug: client STONITH_RA_EXEC_31145 (pid=31145) succeeded to signon to stonithd.
lrmd[31145]: 2008/09/12_16:16:17 debug: signed on to stonithd.
lrmd[31145]: 2008/09/12_16:16:17 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:17 debug: client STONITH_RA_EXEC_31145 [pid: 31145] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31146]: 2008/09/12_16:16:17 debug: external_status: called.
stonithd[31146]: 2008/09/12_16:16:17 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31145]: 2008/09/12_16:16:17 debug: a stonith RA operation queue to run, call_id=31146.
lrmd[31145]: 2008/09/12_16:16:17 debug: stonithd_receive_ops_result: begin
stonithd[31146]: 2008/09/12_16:16:17 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31146]: 2008/09/12_16:16:17 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:17 debug: Child process external_prmFencing:0_monitor [31146] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:17 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:17 debug: client STONITH_RA_EXEC_31145 (pid=31145) signed off
stonithd[30007]: 2008/09/12_16:16:20 ERROR: Failed to STONITH the node node01: optype=RESET, op_result=TIMEOUT
stonithd[30007]: 2008/09/12_16:16:20 debug: stonithop_result_to_local_client: succeed in sending back final result message.
crmd[30009]: 2008/09/12_16:16:20 debug: stonithd_receive_ops_result: begin
crmd[30009]: 2008/09/12_16:16:20 info: tengine_stonith_callback: call=30535, optype=1, node_name=node01, result=2, node_list=, action=18:1:0:c2c5eded-35e0-433b-bdc7-39bfc5fdc0b0
crmd[30009]: 2008/09/12_16:16:20 ERROR: tengine_stonith_callback: Stonith of node01 failed (2)... aborting transition.
crmd[30009]: 2008/09/12_16:16:20 info: abort_transition_graph: tengine_stonith_callback:248 - Triggered graph processing (complete=0) : Stonith failed
crmd[30009]: 2008/09/12_16:16:20 info: update_abort_priority: Abort priority upgraded from 0 to 1000000
crmd[30009]: 2008/09/12_16:16:20 info: update_abort_priority: Abort action done superceeded by restart
crmd[30009]: 2008/09/12_16:16:20 info: run_graph: ====================================================
crmd[30009]: 2008/09/12_16:16:20 notice: run_graph: Transition 1: (Complete=5, Pending=0, Fired=0, Skipped=4, Incomplete=0)
crmd[30009]: 2008/09/12_16:16:20 info: te_graph_trigger: Transition 1 is now complete
crmd[30009]: 2008/09/12_16:16:20 debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
crmd[30009]: 2008/09/12_16:16:20 debug: notify_crmd: Transition 1 status: restart - Stonith failed
crmd[30009]: 2008/09/12_16:16:20 debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[30009]: 2008/09/12_16:16:20 info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
crmd[30009]: 2008/09/12_16:16:20 info: do_state_transition: All 2 cluster nodes are eligible to run resources.
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
crmd[30009]: 2008/09/12_16:16:20 debug: do_pe_invoke: Requesting the current CIB: S_POLICY_ENGINE
crmd[30009]: 2008/09/12_16:16:20 debug: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1221203780-29, seq=2, quorate=1
pengine[30168]: 2008/09/12_16:16:20 WARN: process_pe_message: Your current configuration only conforms to transitional-0.6
pengine[30168]: 2008/09/12_16:16:20 WARN: process_pe_message: Please use 'cibadmin --upgrade' to convert to pacemaker-0.7
pengine[30168]: 2008/09/12_16:16:20 debug: update_validation: Testing 'transitional-0.6' validation
pengine[30168]: 2008/09/12_16:16:20 notice: update_validation: Upgrading transitional-0.6-style configuration to pacemaker-0.7 with /usr/share/pacemaker/upgrade06.xsl
pengine[30168]: 2008/09/12_16:16:20 info: update_validation: Transformation /usr/share/pacemaker/upgrade06.xsl successful
pengine[30168]: 2008/09/12_16:16:20 notice: update_validation: Upgraded from transitional-0.6 to pacemaker-0.7 validation
pengine[30168]: 2008/09/12_16:16:20 WARN: process_pe_message: Your configuration was internally updated to pacemaker-0.7
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: Default action timeout: 120s
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: Default stickiness: 1000000
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: Stop all active resources: false
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: Default failure timeout: 0
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: Default migration threshold: 0
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: STONITH of failed nodes is enabled
pengine[30168]: 2008/09/12_16:16:20 debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
pengine[30168]: 2008/09/12_16:16:20 notice: unpack_config: On loss of CCM Quorum: Ignore
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/rsc_defaults
pengine[30168]: 2008/09/12_16:16:20 info: determine_online_status: Node node02 is online
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 info: determine_online_status: Node node01 is online
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 info: unpack_rsc_op: Remapping dummy_monitor_10000 (rc=7) on node01 to an ERROR (expected 0)
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 WARN: unpack_rsc_op: Processing failed op dummy_monitor_10000 on node01: Error
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 notice: native_print: dummy	(ocf::heartbeat:Dummy):	Started node01 FAILED
pengine[30168]: 2008/09/12_16:16:20 notice: clone_print: Clone Set: clnFencing
pengine[30168]: 2008/09/12_16:16:20 notice: native_print:     prmFencing:0	(stonith:external/sshTEST):	Started node02
pengine[30168]: 2008/09/12_16:16:20 notice: native_print:     prmFencing:1	(stonith:external/sshTEST):	Started node01
pengine[30168]: 2008/09/12_16:16:20 debug: common_apply_stickiness: Resource dummy: preferring current location (node=node01, weight=1000000)
pengine[30168]: 2008/09/12_16:16:20 info: get_failcount: dummy has failed 1 times on node01
pengine[30168]: 2008/09/12_16:16:20 debug: common_apply_stickiness: Resource prmFencing:1: preferring current location (node=node01, weight=1)
pengine[30168]: 2008/09/12_16:16:20 debug: common_apply_stickiness: Resource prmFencing:0: preferring current location (node=node02, weight=1)
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
stonithd[30007]: 2008/09/12_16:16:20 info: client tengine [pid: 30009] requests a STONITH operation RESET on node node01
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_LOG   
stonithd[31167]: 2008/09/12_16:16:20 debug: external_reset_req: called.
pengine[30168]: 2008/09/12_16:16:20 debug: native_assign_node: Assigning node02 to dummy
crmd[30009]: 2008/09/12_16:16:20 info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
stonithd[31167]: 2008/09/12_16:16:20 debug: Host external-reset initiating on node01
pengine[30168]: 2008/09/12_16:16:20 debug: native_assign_node: Assigning node02 to prmFencing:0
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
stonithd[31167]: 2008/09/12_16:16:20 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST reset node01'
pengine[30168]: 2008/09/12_16:16:20 debug: native_assign_node: All nodes for resource prmFencing:1 are unavailable, unclean or shutting down (node01: 0, -1000000)
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
stonithd[30007]: 2008/09/12_16:16:20 info: stonith_operate_locally::2591: sending fencing op RESET for node01 to prmFencing:0 (external/sshTEST) (pid=31167)
pengine[30168]: 2008/09/12_16:16:20 WARN: native_color: Resource prmFencing:1 cannot run anywhere
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
stonithd[30007]: 2008/09/12_16:16:20 debug: inserted optype=RESET, key=31167
pengine[30168]: 2008/09/12_16:16:20 debug: clone_color: Allocated 1 clnFencing instances of a possible 2
crmd[30009]: 2008/09/12_16:16:20 debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
stonithd[30007]: 2008/09/12_16:16:20 debug: stonithd_node_fence: sent back a synchronous reply.
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 info: unpack_graph: Unpacked transition 2: 9 actions in 9 synapses
pengine[30168]: 2008/09/12_16:16:20 notice: NoRoleChange: Leave resource dummy	(Started node02)
crmd[30009]: 2008/09/12_16:16:20 info: do_te_invoke: Processing graph 2 derived from /var/lib/heartbeat/pengine/pe-warn-1.bz2
pengine[30168]: 2008/09/12_16:16:20 notice: StopRsc:   node01	Stop dummy
crmd[30009]: 2008/09/12_16:16:20 debug: start_global_timer: Starting abort timer: 60000ms
pengine[30168]: 2008/09/12_16:16:20 notice: StartRsc:  node02	Start dummy
crmd[30009]: 2008/09/12_16:16:20 debug: initiate_action: Action 15: Increasing IDLE timer to 240000
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 info: te_pseudo_action: Pseudo action 15 fired and confirmed
pengine[30168]: 2008/09/12_16:16:20 notice: RecurringOp:  Start recurring monitor (10s) for dummy on node02
crmd[30009]: 2008/09/12_16:16:20 info: te_pseudo_action: Pseudo action 17 fired and confirmed
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 info: te_fence_node: Executing reboot fencing operation (18) on node01 (timeout=120000)
pengine[30168]: 2008/09/12_16:16:20 notice: NoRoleChange: Leave resource prmFencing:0	(Started node02)
crmd[30009]: 2008/09/12_16:16:20 debug: te_connect_stonith: Still connected
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 debug: waiting for the stonith reply msg.
pengine[30168]: 2008/09/12_16:16:20 notice: NoRoleChange: Stop resource prmFencing:1	(Started node01)
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 notice: StopRsc:   node01	Stop prmFencing:1
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
pengine[30168]: 2008/09/12_16:16:20 WARN: stage6: Scheduling Node node01 for STONITH
pengine[30168]: 2008/09/12_16:16:20 WARN: native_stop_constraints: Stop of failed resource dummy is implicit after node01 is fenced
pengine[30168]: 2008/09/12_16:16:20 info: native_stop_constraints: prmFencing:1_stop_0 is implicit after node01 is fenced
pengine[30168]: 2008/09/12_16:16:20 notice: NoRoleChange: Leave resource prmFencing:0	(Started node02)
pengine[30168]: 2008/09/12_16:16:20 notice: NoRoleChange: Stop resource prmFencing:1	(Started node01)
pengine[30168]: 2008/09/12_16:16:20 notice: StopRsc:   node01	Stop prmFencing:1
pengine[30168]: 2008/09/12_16:16:20 debug: get_xpath_object: No match for //cib/configuration/op_defaults
crmd[30009]: 2008/09/12_16:16:20 debug: stonithd_node_fence:563: Stonithd's synchronous answer is ST_APIOK
crmd[30009]: 2008/09/12_16:16:20 debug: run_graph: Transition 2: (Complete=0, Pending=1, Fired=3, Skipped=0, Incomplete=6)
crmd[30009]: 2008/09/12_16:16:20 debug: start_global_timer: Starting abort timer: 240000ms
crmd[30009]: 2008/09/12_16:16:20 info: te_pseudo_action: Pseudo action 12 fired and confirmed
crmd[30009]: 2008/09/12_16:16:20 info: te_pseudo_action: Pseudo action 16 fired and confirmed
crmd[30009]: 2008/09/12_16:16:20 debug: run_graph: Transition 2: (Complete=2, Pending=1, Fired=2, Skipped=0, Incomplete=4)
crmd[30009]: 2008/09/12_16:16:20 debug: start_global_timer: Starting abort timer: 240000ms
crmd[30009]: 2008/09/12_16:16:20 debug: run_graph: Transition 2: (Complete=4, Pending=1, Fired=0, Skipped=0, Incomplete=4)
pengine[30168]: 2008/09/12_16:16:20 WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/heartbeat/pengine/pe-warn-1.bz2
pengine[30168]: 2008/09/12_16:16:20 info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
lrmd[31181]: 2008/09/12_16:16:22 debug: stonithd_signon: creating connection
lrmd[31181]: 2008/09/12_16:16:22 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:22 debug: client STONITH_RA_EXEC_31181 (pid=31181) succeeded to signon to stonithd.
lrmd[31181]: 2008/09/12_16:16:22 debug: signed on to stonithd.
lrmd[31181]: 2008/09/12_16:16:22 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:22 debug: client STONITH_RA_EXEC_31181 [pid: 31181] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[31181]: 2008/09/12_16:16:22 debug: a stonith RA operation queue to run, call_id=31182.
stonithd[31182]: 2008/09/12_16:16:22 debug: external_status: called.
lrmd[31181]: 2008/09/12_16:16:22 debug: stonithd_receive_ops_result: begin
stonithd[31182]: 2008/09/12_16:16:22 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[31182]: 2008/09/12_16:16:22 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31182]: 2008/09/12_16:16:22 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:22 debug: Child process external_prmFencing:0_monitor [31182] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:22 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:22 debug: client STONITH_RA_EXEC_31181 (pid=31181) signed off
lrmd[31218]: 2008/09/12_16:16:27 debug: stonithd_signon: creating connection
stonithd[30007]: 2008/09/12_16:16:27 debug: client STONITH_RA_EXEC_31218 (pid=31218) succeeded to signon to stonithd.
lrmd[31218]: 2008/09/12_16:16:27 debug: sending out the signon msg.
lrmd[31218]: 2008/09/12_16:16:27 debug: signed on to stonithd.
lrmd[31218]: 2008/09/12_16:16:27 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:27 debug: client STONITH_RA_EXEC_31218 [pid: 31218] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31219]: 2008/09/12_16:16:27 debug: external_status: called.
stonithd[31219]: 2008/09/12_16:16:27 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31218]: 2008/09/12_16:16:27 debug: a stonith RA operation queue to run, call_id=31219.
lrmd[31218]: 2008/09/12_16:16:27 debug: stonithd_receive_ops_result: begin
stonithd[31219]: 2008/09/12_16:16:27 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31219]: 2008/09/12_16:16:27 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:27 debug: Child process external_prmFencing:0_monitor [31219] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:27 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:27 debug: client STONITH_RA_EXEC_31218 (pid=31218) signed off
lrmd[31252]: 2008/09/12_16:16:32 debug: stonithd_signon: creating connection
lrmd[31252]: 2008/09/12_16:16:32 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:32 debug: client STONITH_RA_EXEC_31252 (pid=31252) succeeded to signon to stonithd.
lrmd[31252]: 2008/09/12_16:16:32 debug: signed on to stonithd.
lrmd[31252]: 2008/09/12_16:16:32 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:32 debug: client STONITH_RA_EXEC_31252 [pid: 31252] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[31252]: 2008/09/12_16:16:32 debug: a stonith RA operation queue to run, call_id=31253.
lrmd[31252]: 2008/09/12_16:16:32 debug: stonithd_receive_ops_result: begin
stonithd[31253]: 2008/09/12_16:16:32 debug: external_status: called.
stonithd[31253]: 2008/09/12_16:16:32 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[31253]: 2008/09/12_16:16:32 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31253]: 2008/09/12_16:16:32 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:32 debug: Child process external_prmFencing:0_monitor [31253] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:32 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:32 debug: client STONITH_RA_EXEC_31252 (pid=31252) signed off
stonithd[31167]: 2008/09/12_16:16:35 info: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST reset node01' returned 256
stonithd[31167]: 2008/09/12_16:16:35 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST reset node01' output: 
stonithd[31167]: 2008/09/12_16:16:35 debug: external_reset_req: running 'sshTEST reset' returned 256
stonithd[31167]: 2008/09/12_16:16:35 CRIT: external_reset_req: 'sshTEST reset' for host node01 failed with rc 256
stonithd[30007]: 2008/09/12_16:16:35 debug: Child process external_prmFencing:0_1 [31167] exited, its exit code: 5 when signo=0.
stonithd[30007]: 2008/09/12_16:16:35 info: failed to STONITH node node01 with local device prmFencing:0 (exitcode 5), gonna try the next local device
stonithd[30007]: 2008/09/12_16:16:35 debug: get_local_stonithobj_can_stonith: begin_rsc_id donnot exist.
stonithd[30007]: 2008/09/12_16:16:35 debug: failed to STONITH node node01 locally
stonithd[30007]: 2008/09/12_16:16:35 debug: Will ask other nodes to help STONITH node node01.
stonithd[30007]: 2008/09/12_16:16:35 info: we can't manage node01, broadcast request to other nodes
stonithd[30007]: 2008/09/12_16:16:35 debug: changeto_remote_stonithop: removed optype=RESET, key=31167
stonithd[30007]: 2008/09/12_16:16:35 debug: changeto_remote_stonithop: inserted optype=RESET, key=31167
stonithd[30007]: 2008/09/12_16:16:36 debug: handle_msg_tstit: got T_STIT msg.
stonithd[30007]: 2008/09/12_16:16:36 debug: received a T_STITmsg from myself, ignoring
lrmd[31287]: 2008/09/12_16:16:37 debug: stonithd_signon: creating connection
lrmd[31287]: 2008/09/12_16:16:37 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:37 debug: client STONITH_RA_EXEC_31287 (pid=31287) succeeded to signon to stonithd.
lrmd[31287]: 2008/09/12_16:16:37 debug: signed on to stonithd.
lrmd[31287]: 2008/09/12_16:16:37 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:37 debug: client STONITH_RA_EXEC_31287 [pid: 31287] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[31287]: 2008/09/12_16:16:37 debug: a stonith RA operation queue to run, call_id=31288.
stonithd[31288]: 2008/09/12_16:16:37 debug: external_status: called.
lrmd[31287]: 2008/09/12_16:16:37 debug: stonithd_receive_ops_result: begin
stonithd[31288]: 2008/09/12_16:16:37 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[31288]: 2008/09/12_16:16:37 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31288]: 2008/09/12_16:16:37 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:37 debug: Child process external_prmFencing:0_monitor [31288] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:37 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:37 debug: client STONITH_RA_EXEC_31287 (pid=31287) signed off
lrmd[31311]: 2008/09/12_16:16:42 debug: stonithd_signon: creating connection
lrmd[31311]: 2008/09/12_16:16:42 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:42 debug: client STONITH_RA_EXEC_31311 (pid=31311) succeeded to signon to stonithd.
lrmd[31311]: 2008/09/12_16:16:42 debug: signed on to stonithd.
lrmd[31311]: 2008/09/12_16:16:42 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:42 debug: client STONITH_RA_EXEC_31311 [pid: 31311] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31312]: 2008/09/12_16:16:42 debug: external_status: called.
stonithd[31312]: 2008/09/12_16:16:42 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31311]: 2008/09/12_16:16:42 debug: a stonith RA operation queue to run, call_id=31312.
lrmd[31311]: 2008/09/12_16:16:42 debug: stonithd_receive_ops_result: begin
stonithd[31312]: 2008/09/12_16:16:42 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31312]: 2008/09/12_16:16:42 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:42 debug: Child process external_prmFencing:0_monitor [31312] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:42 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:42 debug: client STONITH_RA_EXEC_31311 (pid=31311) signed off
lrmd[31376]: 2008/09/12_16:16:47 debug: stonithd_signon: creating connection
lrmd[31376]: 2008/09/12_16:16:47 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:47 debug: client STONITH_RA_EXEC_31376 (pid=31376) succeeded to signon to stonithd.
lrmd[31376]: 2008/09/12_16:16:47 debug: signed on to stonithd.
lrmd[31376]: 2008/09/12_16:16:47 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:47 debug: client STONITH_RA_EXEC_31376 [pid: 31376] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31377]: 2008/09/12_16:16:47 debug: external_status: called.
stonithd[31377]: 2008/09/12_16:16:47 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31376]: 2008/09/12_16:16:47 debug: a stonith RA operation queue to run, call_id=31377.
lrmd[31376]: 2008/09/12_16:16:47 debug: stonithd_receive_ops_result: begin
stonithd[31377]: 2008/09/12_16:16:47 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31377]: 2008/09/12_16:16:47 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:47 debug: Child process external_prmFencing:0_monitor [31377] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:47 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:47 debug: client STONITH_RA_EXEC_31376 (pid=31376) signed off
crmd[30009]: 2008/09/12_16:16:51 info: handle_request: Current ping state: S_TRANSITION_ENGINE
lrmd[31530]: 2008/09/12_16:16:52 debug: stonithd_signon: creating connection
lrmd[31530]: 2008/09/12_16:16:52 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:52 debug: client STONITH_RA_EXEC_31530 (pid=31530) succeeded to signon to stonithd.
lrmd[31530]: 2008/09/12_16:16:52 debug: signed on to stonithd.
lrmd[31530]: 2008/09/12_16:16:52 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:52 debug: client STONITH_RA_EXEC_31530 [pid: 31530] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[31530]: 2008/09/12_16:16:52 debug: a stonith RA operation queue to run, call_id=31531.
stonithd[31531]: 2008/09/12_16:16:52 debug: external_status: called.
lrmd[31530]: 2008/09/12_16:16:52 debug: stonithd_receive_ops_result: begin
stonithd[31531]: 2008/09/12_16:16:52 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[31531]: 2008/09/12_16:16:53 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31531]: 2008/09/12_16:16:53 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:53 debug: Child process external_prmFencing:0_monitor [31531] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:53 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:53 debug: client STONITH_RA_EXEC_31530 (pid=31530) signed off
crmd[30009]: 2008/09/12_16:16:53 info: handle_request: Current ping state: S_TRANSITION_ENGINE
crmd[30009]: 2008/09/12_16:16:55 info: handle_request: Current ping state: S_TRANSITION_ENGINE
crmd[30009]: 2008/09/12_16:16:55 info: handle_request: Current ping state: S_TRANSITION_ENGINE
lrmd[31954]: 2008/09/12_16:16:58 debug: stonithd_signon: creating connection
lrmd[31954]: 2008/09/12_16:16:58 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:16:58 debug: client STONITH_RA_EXEC_31954 (pid=31954) succeeded to signon to stonithd.
lrmd[31954]: 2008/09/12_16:16:58 debug: signed on to stonithd.
lrmd[31954]: 2008/09/12_16:16:58 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:16:58 debug: client STONITH_RA_EXEC_31954 [pid: 31954] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[31955]: 2008/09/12_16:16:58 debug: external_status: called.
stonithd[31955]: 2008/09/12_16:16:58 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[31954]: 2008/09/12_16:16:58 debug: a stonith RA operation queue to run, call_id=31955.
lrmd[31954]: 2008/09/12_16:16:58 debug: stonithd_receive_ops_result: begin
stonithd[31955]: 2008/09/12_16:16:58 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[31955]: 2008/09/12_16:16:58 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:16:58 debug: Child process external_prmFencing:0_monitor [31955] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:16:58 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:16:58 debug: client STONITH_RA_EXEC_31954 (pid=31954) signed off
lrmd[32099]: 2008/09/12_16:17:03 debug: stonithd_signon: creating connection
lrmd[32099]: 2008/09/12_16:17:03 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:03 debug: client STONITH_RA_EXEC_32099 (pid=32099) succeeded to signon to stonithd.
lrmd[32099]: 2008/09/12_16:17:03 debug: signed on to stonithd.
lrmd[32099]: 2008/09/12_16:17:03 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:03 debug: client STONITH_RA_EXEC_32099 [pid: 32099] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32100]: 2008/09/12_16:17:03 debug: external_status: called.
stonithd[32100]: 2008/09/12_16:17:03 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32099]: 2008/09/12_16:17:03 debug: a stonith RA operation queue to run, call_id=32100.
lrmd[32099]: 2008/09/12_16:17:03 debug: stonithd_receive_ops_result: begin
stonithd[32100]: 2008/09/12_16:17:03 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32100]: 2008/09/12_16:17:03 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:03 debug: Child process external_prmFencing:0_monitor [32100] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:03 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:03 debug: client STONITH_RA_EXEC_32099 (pid=32099) signed off
lrmd[32123]: 2008/09/12_16:17:08 debug: stonithd_signon: creating connection
lrmd[32123]: 2008/09/12_16:17:08 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:08 debug: client STONITH_RA_EXEC_32123 (pid=32123) succeeded to signon to stonithd.
lrmd[32123]: 2008/09/12_16:17:08 debug: signed on to stonithd.
lrmd[32123]: 2008/09/12_16:17:08 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:08 debug: client STONITH_RA_EXEC_32123 [pid: 32123] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32124]: 2008/09/12_16:17:08 debug: external_status: called.
stonithd[32124]: 2008/09/12_16:17:08 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32123]: 2008/09/12_16:17:08 debug: a stonith RA operation queue to run, call_id=32124.
lrmd[32123]: 2008/09/12_16:17:08 debug: stonithd_receive_ops_result: begin
stonithd[32124]: 2008/09/12_16:17:08 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32124]: 2008/09/12_16:17:08 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:08 debug: Child process external_prmFencing:0_monitor [32124] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:08 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:08 debug: client STONITH_RA_EXEC_32123 (pid=32123) signed off
lrmd[32150]: 2008/09/12_16:17:13 debug: stonithd_signon: creating connection
lrmd[32150]: 2008/09/12_16:17:13 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:13 debug: client STONITH_RA_EXEC_32150 (pid=32150) succeeded to signon to stonithd.
lrmd[32150]: 2008/09/12_16:17:13 debug: signed on to stonithd.
lrmd[32150]: 2008/09/12_16:17:13 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:13 debug: client STONITH_RA_EXEC_32150 [pid: 32150] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32151]: 2008/09/12_16:17:13 debug: external_status: called.
stonithd[32151]: 2008/09/12_16:17:13 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32150]: 2008/09/12_16:17:13 debug: a stonith RA operation queue to run, call_id=32151.
lrmd[32150]: 2008/09/12_16:17:13 debug: stonithd_receive_ops_result: begin
stonithd[32151]: 2008/09/12_16:17:13 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32151]: 2008/09/12_16:17:13 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:13 debug: Child process external_prmFencing:0_monitor [32151] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:13 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:13 debug: client STONITH_RA_EXEC_32150 (pid=32150) signed off
lrmd[32174]: 2008/09/12_16:17:18 debug: stonithd_signon: creating connection
lrmd[32174]: 2008/09/12_16:17:18 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:18 debug: client STONITH_RA_EXEC_32174 (pid=32174) succeeded to signon to stonithd.
lrmd[32174]: 2008/09/12_16:17:18 debug: signed on to stonithd.
lrmd[32174]: 2008/09/12_16:17:18 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:18 debug: client STONITH_RA_EXEC_32174 [pid: 32174] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32175]: 2008/09/12_16:17:18 debug: external_status: called.
stonithd[32175]: 2008/09/12_16:17:18 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32174]: 2008/09/12_16:17:18 debug: a stonith RA operation queue to run, call_id=32175.
lrmd[32174]: 2008/09/12_16:17:18 debug: stonithd_receive_ops_result: begin
stonithd[32175]: 2008/09/12_16:17:18 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32175]: 2008/09/12_16:17:18 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:18 debug: Child process external_prmFencing:0_monitor [32175] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:18 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:18 debug: client STONITH_RA_EXEC_32174 (pid=32174) signed off
lrmd[32198]: 2008/09/12_16:17:23 debug: stonithd_signon: creating connection
lrmd[32198]: 2008/09/12_16:17:23 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:23 debug: client STONITH_RA_EXEC_32198 (pid=32198) succeeded to signon to stonithd.
lrmd[32198]: 2008/09/12_16:17:23 debug: signed on to stonithd.
lrmd[32198]: 2008/09/12_16:17:23 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:23 debug: client STONITH_RA_EXEC_32198 [pid: 32198] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32199]: 2008/09/12_16:17:23 debug: external_status: called.
stonithd[32199]: 2008/09/12_16:17:23 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32198]: 2008/09/12_16:17:23 debug: a stonith RA operation queue to run, call_id=32199.
lrmd[32198]: 2008/09/12_16:17:23 debug: stonithd_receive_ops_result: begin
stonithd[32199]: 2008/09/12_16:17:23 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32199]: 2008/09/12_16:17:23 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:23 debug: Child process external_prmFencing:0_monitor [32199] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:23 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:23 debug: client STONITH_RA_EXEC_32198 (pid=32198) signed off
lrmd[32225]: 2008/09/12_16:17:28 debug: stonithd_signon: creating connection
lrmd[32225]: 2008/09/12_16:17:28 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:28 debug: client STONITH_RA_EXEC_32225 (pid=32225) succeeded to signon to stonithd.
lrmd[32225]: 2008/09/12_16:17:28 debug: signed on to stonithd.
lrmd[32225]: 2008/09/12_16:17:28 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:28 debug: client STONITH_RA_EXEC_32225 [pid: 32225] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32226]: 2008/09/12_16:17:28 debug: external_status: called.
stonithd[32226]: 2008/09/12_16:17:28 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32225]: 2008/09/12_16:17:28 debug: a stonith RA operation queue to run, call_id=32226.
lrmd[32225]: 2008/09/12_16:17:28 debug: stonithd_receive_ops_result: begin
stonithd[32226]: 2008/09/12_16:17:28 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32226]: 2008/09/12_16:17:28 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:28 debug: Child process external_prmFencing:0_monitor [32226] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:28 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:28 debug: client STONITH_RA_EXEC_32225 (pid=32225) signed off
lrmd[32249]: 2008/09/12_16:17:33 debug: stonithd_signon: creating connection
lrmd[32249]: 2008/09/12_16:17:33 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:33 debug: client STONITH_RA_EXEC_32249 (pid=32249) succeeded to signon to stonithd.
lrmd[32249]: 2008/09/12_16:17:33 debug: signed on to stonithd.
lrmd[32249]: 2008/09/12_16:17:33 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:33 debug: client STONITH_RA_EXEC_32249 [pid: 32249] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32250]: 2008/09/12_16:17:33 debug: external_status: called.
stonithd[32250]: 2008/09/12_16:17:33 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32249]: 2008/09/12_16:17:33 debug: a stonith RA operation queue to run, call_id=32250.
lrmd[32249]: 2008/09/12_16:17:33 debug: stonithd_receive_ops_result: begin
stonithd[32250]: 2008/09/12_16:17:33 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32250]: 2008/09/12_16:17:33 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:33 debug: Child process external_prmFencing:0_monitor [32250] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:33 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:33 debug: client STONITH_RA_EXEC_32249 (pid=32249) signed off
lrmd[32277]: 2008/09/12_16:17:38 debug: stonithd_signon: creating connection
stonithd[30007]: 2008/09/12_16:17:38 debug: client STONITH_RA_EXEC_32277 (pid=32277) succeeded to signon to stonithd.
lrmd[32277]: 2008/09/12_16:17:38 debug: sending out the signon msg.
lrmd[32277]: 2008/09/12_16:17:38 debug: signed on to stonithd.
lrmd[32277]: 2008/09/12_16:17:38 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:38 debug: client STONITH_RA_EXEC_32277 [pid: 32277] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32278]: 2008/09/12_16:17:38 debug: external_status: called.
stonithd[32278]: 2008/09/12_16:17:38 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32277]: 2008/09/12_16:17:38 debug: a stonith RA operation queue to run, call_id=32278.
lrmd[32277]: 2008/09/12_16:17:38 debug: stonithd_receive_ops_result: begin
stonithd[32278]: 2008/09/12_16:17:38 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32278]: 2008/09/12_16:17:38 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:38 debug: Child process external_prmFencing:0_monitor [32278] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:38 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:38 debug: client STONITH_RA_EXEC_32277 (pid=32277) signed off
lrmd[32301]: 2008/09/12_16:17:43 debug: stonithd_signon: creating connection
lrmd[32301]: 2008/09/12_16:17:43 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:43 debug: client STONITH_RA_EXEC_32301 (pid=32301) succeeded to signon to stonithd.
lrmd[32301]: 2008/09/12_16:17:43 debug: signed on to stonithd.
lrmd[32301]: 2008/09/12_16:17:43 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:43 debug: client STONITH_RA_EXEC_32301 [pid: 32301] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32302]: 2008/09/12_16:17:43 debug: external_status: called.
stonithd[32302]: 2008/09/12_16:17:43 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32301]: 2008/09/12_16:17:43 debug: a stonith RA operation queue to run, call_id=32302.
lrmd[32301]: 2008/09/12_16:17:43 debug: stonithd_receive_ops_result: begin
stonithd[32302]: 2008/09/12_16:17:43 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32302]: 2008/09/12_16:17:43 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:43 debug: Child process external_prmFencing:0_monitor [32302] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:43 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:43 debug: client STONITH_RA_EXEC_32301 (pid=32301) signed off
lrmd[32327]: 2008/09/12_16:17:48 debug: stonithd_signon: creating connection
lrmd[32327]: 2008/09/12_16:17:48 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:48 debug: client STONITH_RA_EXEC_32327 (pid=32327) succeeded to signon to stonithd.
lrmd[32327]: 2008/09/12_16:17:48 debug: signed on to stonithd.
lrmd[32327]: 2008/09/12_16:17:48 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:48 debug: client STONITH_RA_EXEC_32327 [pid: 32327] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[32327]: 2008/09/12_16:17:48 debug: a stonith RA operation queue to run, call_id=32328.
stonithd[32328]: 2008/09/12_16:17:48 debug: external_status: called.
lrmd[32327]: 2008/09/12_16:17:48 debug: stonithd_receive_ops_result: begin
stonithd[32328]: 2008/09/12_16:17:48 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[32328]: 2008/09/12_16:17:48 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32328]: 2008/09/12_16:17:48 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:48 debug: Child process external_prmFencing:0_monitor [32328] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:48 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:48 debug: client STONITH_RA_EXEC_32327 (pid=32327) signed off
lrmd[32353]: 2008/09/12_16:17:53 debug: stonithd_signon: creating connection
lrmd[32353]: 2008/09/12_16:17:53 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:53 debug: client STONITH_RA_EXEC_32353 (pid=32353) succeeded to signon to stonithd.
lrmd[32353]: 2008/09/12_16:17:53 debug: signed on to stonithd.
lrmd[32353]: 2008/09/12_16:17:54 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:54 debug: client STONITH_RA_EXEC_32353 [pid: 32353] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
lrmd[32353]: 2008/09/12_16:17:54 debug: a stonith RA operation queue to run, call_id=32354.
stonithd[32354]: 2008/09/12_16:17:54 debug: external_status: called.
lrmd[32353]: 2008/09/12_16:17:54 debug: stonithd_receive_ops_result: begin
stonithd[32354]: 2008/09/12_16:17:54 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
stonithd[32354]: 2008/09/12_16:17:54 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32354]: 2008/09/12_16:17:54 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:54 debug: Child process external_prmFencing:0_monitor [32354] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:54 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:54 debug: client STONITH_RA_EXEC_32353 (pid=32353) signed off
lrmd[32378]: 2008/09/12_16:17:59 debug: stonithd_signon: creating connection
lrmd[32378]: 2008/09/12_16:17:59 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:17:59 debug: client STONITH_RA_EXEC_32378 (pid=32378) succeeded to signon to stonithd.
lrmd[32378]: 2008/09/12_16:17:59 debug: signed on to stonithd.
lrmd[32378]: 2008/09/12_16:17:59 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:17:59 debug: client STONITH_RA_EXEC_32378 [pid: 32378] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32379]: 2008/09/12_16:17:59 debug: external_status: called.
stonithd[32379]: 2008/09/12_16:17:59 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32378]: 2008/09/12_16:17:59 debug: a stonith RA operation queue to run, call_id=32379.
lrmd[32378]: 2008/09/12_16:17:59 debug: stonithd_receive_ops_result: begin
stonithd[32379]: 2008/09/12_16:17:59 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32379]: 2008/09/12_16:17:59 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:17:59 debug: Child process external_prmFencing:0_monitor [32379] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:17:59 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:17:59 debug: client STONITH_RA_EXEC_32378 (pid=32378) signed off
lrmd[32405]: 2008/09/12_16:18:04 debug: stonithd_signon: creating connection
stonithd[30007]: 2008/09/12_16:18:04 debug: client STONITH_RA_EXEC_32405 (pid=32405) succeeded to signon to stonithd.
lrmd[32405]: 2008/09/12_16:18:04 debug: sending out the signon msg.
lrmd[32405]: 2008/09/12_16:18:04 debug: signed on to stonithd.
lrmd[32405]: 2008/09/12_16:18:04 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:18:04 debug: client STONITH_RA_EXEC_32405 [pid: 32405] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32406]: 2008/09/12_16:18:04 debug: external_status: called.
stonithd[32406]: 2008/09/12_16:18:04 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32405]: 2008/09/12_16:18:04 debug: a stonith RA operation queue to run, call_id=32406.
lrmd[32405]: 2008/09/12_16:18:04 debug: stonithd_receive_ops_result: begin
stonithd[32406]: 2008/09/12_16:18:04 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32406]: 2008/09/12_16:18:04 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:18:04 debug: Child process external_prmFencing:0_monitor [32406] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:18:04 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:18:04 debug: client STONITH_RA_EXEC_32405 (pid=32405) signed off
lrmd[32429]: 2008/09/12_16:18:09 debug: stonithd_signon: creating connection
lrmd[32429]: 2008/09/12_16:18:09 debug: sending out the signon msg.
stonithd[30007]: 2008/09/12_16:18:09 debug: client STONITH_RA_EXEC_32429 (pid=32429) succeeded to signon to stonithd.
lrmd[32429]: 2008/09/12_16:18:09 debug: signed on to stonithd.
lrmd[32429]: 2008/09/12_16:18:09 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:18:09 debug: client STONITH_RA_EXEC_32429 [pid: 32429] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32430]: 2008/09/12_16:18:09 debug: external_status: called.
stonithd[32430]: 2008/09/12_16:18:09 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32429]: 2008/09/12_16:18:09 debug: a stonith RA operation queue to run, call_id=32430.
lrmd[32429]: 2008/09/12_16:18:09 debug: stonithd_receive_ops_result: begin
stonithd[32430]: 2008/09/12_16:18:09 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32430]: 2008/09/12_16:18:09 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:18:09 debug: Child process external_prmFencing:0_monitor [32430] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:18:09 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:18:09 debug: client STONITH_RA_EXEC_32429 (pid=32429) signed off
crmd[30009]: 2008/09/12_16:18:14 info: handle_request: Current ping state: S_TRANSITION_ENGINE
lrmd[32568]: 2008/09/12_16:18:14 debug: stonithd_signon: creating connection
stonithd[30007]: 2008/09/12_16:18:14 debug: client STONITH_RA_EXEC_32568 (pid=32568) succeeded to signon to stonithd.
lrmd[32568]: 2008/09/12_16:18:14 debug: sending out the signon msg.
lrmd[32568]: 2008/09/12_16:18:14 debug: signed on to stonithd.
lrmd[32568]: 2008/09/12_16:18:14 debug: waiting for the stonithRA reply msg.
stonithd[30007]: 2008/09/12_16:18:14 debug: client STONITH_RA_EXEC_32568 [pid: 32568] requests a resource operation monitor on prmFencing:0 (external/sshTEST)
stonithd[32569]: 2008/09/12_16:18:14 debug: external_status: called.
stonithd[32569]: 2008/09/12_16:18:14 debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/sshTEST status'
lrmd[32568]: 2008/09/12_16:18:14 debug: a stonith RA operation queue to run, call_id=32569.
lrmd[32568]: 2008/09/12_16:18:14 debug: stonithd_receive_ops_result: begin
stonithd[32569]: 2008/09/12_16:18:14 debug: external_run_cmd: '/usr/lib/stonith/plugins/external/sshTEST status' output: 
stonithd[32569]: 2008/09/12_16:18:14 debug: external_status: running 'sshTEST status' returned 0
stonithd[30007]: 2008/09/12_16:18:14 debug: Child process external_prmFencing:0_monitor [32569] exited, its exit code: 0 when signo=0.
stonithd[30007]: 2008/09/12_16:18:14 debug: prmFencing:0's (external/sshTEST) op monitor finished. op_result=0
stonithd[30007]: 2008/09/12_16:18:14 debug: client STONITH_RA_EXEC_32568 (pid=32568) signed off
crmd[30009]: 2008/09/12_16:18:16 info: handle_request: Current ping state: S_TRANSITION_ENGINE
