Sep 10 15:21:14 corosync [MAIN  ] Corosync Cluster Engine ('1.4.3'): started and ready to provide service.
Sep 10 15:21:14 corosync [MAIN  ] Corosync built-in features: nss
Sep 10 15:21:14 corosync [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Sep 10 15:21:14 corosync [MAIN  ] Successfully parsed cman config
Sep 10 15:21:14 corosync [MAIN  ] Successfully configured openais services to load
Sep 10 15:21:14 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Sep 10 15:21:14 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Sep 10 15:21:14 corosync [TOTEM ] The network interface [192.168.1.200] is now up.
Sep 10 15:21:14 corosync [QUORUM] Using quorum provider quorum_cman
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Sep 10 15:21:14 corosync [CMAN  ] CMAN 1341928020 (built Jul 10 2012 16:47:17) started
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: openais cluster membership service B.01.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: openais event service B.01.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: openais message service B.03.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: openais distributed locking service B.03.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: openais timer service A.01.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync configuration service
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync profile loading service
Sep 10 15:21:14 corosync [QUORUM] Using quorum provider quorum_cman
Sep 10 15:21:14 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Sep 10 15:21:14 corosync [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Sep 10 15:21:14 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:14 corosync [CLM   ] New Configuration:
Sep 10 15:21:14 corosync [CLM   ] Members Left:
Sep 10 15:21:14 corosync [CLM   ] Members Joined:
Sep 10 15:21:14 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:14 corosync [CLM   ] New Configuration:
Sep 10 15:21:14 corosync [CLM   ] 	r(0) ip(192.168.1.200) 
Sep 10 15:21:14 corosync [CLM   ] Members Left:
Sep 10 15:21:14 corosync [CLM   ] Members Joined:
Sep 10 15:21:14 corosync [CLM   ] 	r(0) ip(192.168.1.200) 
Sep 10 15:21:14 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep 10 15:21:14 corosync [CMAN  ] quorum regained, resuming activity
Sep 10 15:21:14 corosync [QUORUM] This node is within the primary component and will provide service.
Sep 10 15:21:14 corosync [QUORUM] Members[1]: 2
Sep 10 15:21:14 corosync [QUORUM] Members[1]: 2
Sep 10 15:21:14 corosync [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:0 left:0)
Sep 10 15:21:14 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Sep 10 15:21:15 corosync [MAIN  ] Corosync Cluster Engine ('1.4.3'): started and ready to provide service.
Sep 10 15:21:15 corosync [MAIN  ] Corosync built-in features: nss
Sep 10 15:21:15 corosync [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Sep 10 15:21:15 corosync [MAIN  ] Successfully parsed cman config
Sep 10 15:21:15 corosync [MAIN  ] Successfully configured openais services to load
Sep 10 15:21:15 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Sep 10 15:21:15 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Sep 10 15:21:15 corosync [TOTEM ] The network interface [192.168.1.199] is now up.
Sep 10 15:21:15 corosync [QUORUM] Using quorum provider quorum_cman
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Sep 10 15:21:15 corosync [CMAN  ] CMAN 1341928020 (built Jul 10 2012 16:47:17) started
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: openais cluster membership service B.01.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: openais event service B.01.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: openais message service B.03.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: openais distributed locking service B.03.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: openais timer service A.01.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync configuration service
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync profile loading service
Sep 10 15:21:15 corosync [QUORUM] Using quorum provider quorum_cman
Sep 10 15:21:15 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Sep 10 15:21:15 corosync [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Sep 10 15:21:15 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:15 corosync [CLM   ] New Configuration:
Sep 10 15:21:15 corosync [CLM   ] Members Left:
Sep 10 15:21:15 corosync [CLM   ] Members Joined:
Sep 10 15:21:15 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:15 corosync [CLM   ] New Configuration:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.199) 
Sep 10 15:21:15 corosync [CLM   ] Members Left:
Sep 10 15:21:15 corosync [CLM   ] Members Joined:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.199) 
Sep 10 15:21:15 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep 10 15:21:15 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:15 corosync [CLM   ] New Configuration:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.199) 
Sep 10 15:21:15 corosync [CLM   ] Members Left:
Sep 10 15:21:15 corosync [CLM   ] Members Joined:
Sep 10 15:21:15 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:15 corosync [CLM   ] New Configuration:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.199) 
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.200) 
Sep 10 15:21:15 corosync [CLM   ] Members Left:
Sep 10 15:21:15 corosync [CLM   ] Members Joined:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.200) 
Sep 10 15:21:15 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep 10 15:21:15 corosync [CMAN  ] quorum regained, resuming activity
Sep 10 15:21:15 corosync [QUORUM] This node is within the primary component and will provide service.
Sep 10 15:21:15 corosync [QUORUM] Members[1]: 2
Sep 10 15:21:15 corosync [QUORUM] Members[1]: 2
Sep 10 15:21:15 corosync [QUORUM] Members[2]: 1 2
Sep 10 15:21:15 corosync [QUORUM] Members[2]: 1 2
Sep 10 15:21:15 corosync [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:1 left:0)
Sep 10 15:21:15 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Set r/w permissions for uid=666, gid=0 on /var/log/cluster/corosync.log
Sep 10 15:21:15 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:15 corosync [CLM   ] New Configuration:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.200) 
Sep 10 15:21:15 corosync [CLM   ] Members Left:
Sep 10 15:21:15 corosync [CLM   ] Members Joined:
Sep 10 15:21:15 corosync [CLM   ] CLM CONFIGURATION CHANGE
Sep 10 15:21:15 corosync [CLM   ] New Configuration:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.199) 
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.200) 
Sep 10 15:21:15 corosync [CLM   ] Members Left:
Sep 10 15:21:15 corosync [CLM   ] Members Joined:
Sep 10 15:21:15 corosync [CLM   ] 	r(0) ip(192.168.1.199) 
Sep 10 15:21:15 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep 10 15:21:15 corosync [QUORUM] Members[2]: 1 2
Sep 10 15:21:15 corosync [QUORUM] Members[2]: 1 2
Sep 10 15:21:15 corosync [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:1 left:0)
Sep 10 15:21:15 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Set r/w permissions for uid=666, gid=0 on /var/log/cluster/corosync.log
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: notice: main: Starting Pacemaker 1.1.7 (Build: ee0730e13d124c3d58f00016c3376a1de5323cff):   corosync-plugin cman
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: main: Maximum core file size is: 18446744073709551615
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: cluster_connect_cfg: Our nodeid: 2
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: cluster_connect_cfg: Adding fd=6 to mainloop
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: cluster_connect_cpg: Our nodeid: 2
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: cluster_connect_cpg: Adding fd=7 to mainloop
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: get_local_node_name: Using CMAN node name: Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: notice: update_node_processes: 0x1cb4db0 Node 2 now known as Cluster-Server-2, was: 
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: start_child: Forked child 40192 for process cib
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000000102 (was 00000000000000000000000000000002)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: start_child: Forked child 40193 for process stonith-ng
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000000102)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: start_child: Forked child 40194 for process lrmd
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: start_child: Forked child 40195 for process attrd
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: start_child: Forked child 40196 for process pengine
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: start_child: Forked child 40197 for process crmd
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: info: main: Starting mainloop
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: WARN: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: get_last_sequence: Series file /var/lib/heartbeat/crm/cib.last does not exist
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: Backup file /var/lib/heartbeat/crm/cib-99.raw not found
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: WARN: readCibXmlFile: Continuing with an empty configuration.
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk] <cib epoch="0" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.2" >
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]   <configuration >
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: Invoked: /usr/libexec/pacemaker/attrd 
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]     <crm_config />
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]     <nodes />
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]     <resources />
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]     <constraints />
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]   </configuration>
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk]   <status />
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: readCibXmlFile: [on-disk] </cib>
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: validate_with_relaxng: Creating RNG parser context
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: main: Starting up
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: info: Invoked: /usr/libexec/pacemaker/pengine 
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: info: Invoked: /usr/libexec/pacemaker/crmd 
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: debug: main: Checking for old instances of pengine
Sep 10 15:21:18 Cluster-Server-2 lrmd: [40194]: info: enabling coredumps
Sep 10 15:21:18 Cluster-Server-2 lrmd: [40194]: debug: main: run the loop...
Sep 10 15:21:18 Cluster-Server-2 lrmd: [40194]: info: Started.
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: notice: main: CRM Git Version: ee0730e13d124c3d58f00016c3376a1de5323cff
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: Invoked: /usr/libexec/pacemaker/stonithd 
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/pengine
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: debug: main: Init server comms
Sep 10 15:21:18 Cluster-Server-2 pengine: [40196]: info: main: Starting pengine
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: crmd_init: Starting crmd
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_STARTUP
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: do_startup: Registering Signal Handlers
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: do_startup: Creating CIB and LRM objects
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CIB_START
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: cib_native_signon_raw: Connection to command channel failed
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: cib_native_signon_raw: Connection to callback channel failed
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Sep 10 15:21:18 Cluster-Server-2 crmd: [40197]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for start op
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: startCib: CIB Initialization completed successfully
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: init_cpg_connection: Adding fd=5 to mainloop
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: get_local_node_name: Using CMAN node name: Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: pcmk_client_connect: Channel 0x1cb77d0 connected: 1 children
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: debug: init_cpg_connection: Adding fd=5 to mainloop
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: cib_init: Starting cib mainloop
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: pcmk_cpg_membership: Member[0] 2 
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-2: id=2 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: get_local_node_name: Using CMAN node name: Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: main: Cluster connection active
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: info: main: Accepting attribute updates
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: notice: main: Starting mainloop...
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: pcmk_client_connect: Channel 0x1cb9350 connected: 2 children
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: debug: pcmk_cpg_membership: Member[0] 2 
Sep 10 15:21:18 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-2: id=2 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: init_cpg_connection: Adding fd=5 to mainloop
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: get_local_node_name: Using CMAN node name: Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:18 Cluster-Server-2 pacemakerd: [40187]: debug: pcmk_client_connect: Channel 0x1cbac60 connected: 3 children
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: notice: setup_cib: Watching for stonith topology changes
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: info: main: Starting stonith-ng mainloop
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 40193 (119b5587-5ad3-4b31-a5e1-1ce3828d7598): on
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: pcmk_cpg_membership: Member[0] 2 
Sep 10 15:21:18 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-2: id=2 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:18 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 40202 exited with return code 0.
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: notice: main: Starting Pacemaker 1.1.7 (Build: ee0730e13d124c3d58f00016c3376a1de5323cff):   corosync-plugin cman
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: main: Maximum core file size is: 18446744073709551615
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: cluster_connect_cfg: Our nodeid: 1
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: cluster_connect_cfg: Adding fd=6 to mainloop
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: cluster_connect_cpg: Our nodeid: 1
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: cluster_connect_cpg: Adding fd=7 to mainloop
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: get_local_node_name: Using CMAN node name: Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: notice: update_node_processes: 0x17e5e00 Node 1 now known as Cluster-Server-1, was: 
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: start_child: Forked child 48709 for process cib
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000000102 (was 00000000000000000000000000000002)
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: start_child: Forked child 48710 for process stonith-ng
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000000102)
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: start_child: Forked child 48712 for process lrmd
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: start_child: Forked child 48713 for process attrd
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: start_child: Forked child 48714 for process pengine
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: Invoked: /usr/libexec/pacemaker/stonithd 
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: start_child: Forked child 48715 for process crmd
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: info: main: Starting mainloop
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: notice: update_node_processes: 0x17ea940 Node 2 now known as Cluster-Server-2, was: 
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: update_node_processes: Node Cluster-Server-2 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000000000)
Sep 10 15:21:19 Cluster-Server-1 lrmd: [48712]: info: enabling coredumps
Sep 10 15:21:19 Cluster-Server-1 lrmd: [48712]: debug: main: run the loop...
Sep 10 15:21:19 Cluster-Server-1 lrmd: [48712]: info: Started.
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: info: Invoked: /usr/libexec/pacemaker/crmd 
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: WARN: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: get_last_sequence: Series file /var/lib/heartbeat/crm/cib.last does not exist
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: Backup file /var/lib/heartbeat/crm/cib-99.raw not found
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: WARN: readCibXmlFile: Continuing with an empty configuration.
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk] <cib epoch="0" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.2" >
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]   <configuration >
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]     <crm_config />
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]     <nodes />
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]     <resources />
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]     <constraints />
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]   </configuration>
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk]   <status />
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: readCibXmlFile: [on-disk] </cib>
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: validate_with_relaxng: Creating RNG parser context
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: Invoked: /usr/libexec/pacemaker/attrd 
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: main: Starting up
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: notice: main: CRM Git Version: ee0730e13d124c3d58f00016c3376a1de5323cff
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: crmd_init: Starting crmd
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_STARTUP
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: do_startup: Registering Signal Handlers
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: do_startup: Creating CIB and LRM objects
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CIB_START
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: cib_native_signon_raw: Connection to command channel failed
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: info: Invoked: /usr/libexec/pacemaker/pengine 
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: debug: main: Checking for old instances of pengine
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/pengine
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: debug: main: Init server comms
Sep 10 15:21:19 Cluster-Server-1 pengine: [48714]: info: main: Starting pengine
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: cib_native_signon_raw: Connection to callback channel failed
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Sep 10 15:21:19 Cluster-Server-1 crmd: [48715]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: init_cpg_connection: Adding fd=5 to mainloop
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: get_local_node_name: Using CMAN node name: Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: pcmk_client_connect: Channel 0x17eb1a0 connected: 1 children
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: init_cpg_connection: Adding fd=5 to mainloop
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: get_local_node_name: Using CMAN node name: Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: main: Cluster connection active
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: main: Accepting attribute updates
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: notice: main: Starting mainloop...
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: pcmk_client_connect: Channel 0x17ec670 connected: 2 children
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:19 Cluster-Server-1 attrd: [48713]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: cib_native_signon_raw: Connection to command channel failed
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: cib_native_signon_raw: Connection to callback channel failed
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
Sep 10 15:21:19 Cluster-Server-1 stonith-ng: [48710]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for start op
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: startCib: CIB Initialization completed successfully
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: init_cpg_connection: Adding fd=5 to mainloop
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: get_local_node_name: Using CMAN node name: Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:19 Cluster-Server-1 pacemakerd: [48700]: debug: pcmk_client_connect: Channel 0x17ec8e0 connected: 3 children
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: cib_init: Starting cib mainloop
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:19 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 48725 exited with return code 0.
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: do_cib_control: CIB connection established
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_HA_CONNECT
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 40197 (18d9948e-23f7-4556-bf9e-dcf6b666e639): on
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 40197 (18d9948e-23f7-4556-bf9e-dcf6b666e639): on
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: init_cpg_connection: Adding fd=7 to mainloop
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: get_local_node_name: Using CMAN node name: Cluster-Server-2
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: ais_status_callback: status: Cluster-Server-2 is now unknown
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: init_cman_connection: Configuring Pacemaker to obtain quorum from cman
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: pcmk_client_connect: Channel 0x1cbaed0 connected: 4 children
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: notice: cman_event_callback: Membership 312: quorum acquired
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: ais_status_callback: status: Cluster-Server-1 is now unknown
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: ais_status_callback: status: Cluster-Server-1 is now member (was unknown)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: crm_update_peer: Node Cluster-Server-1: id=1 state=member (new) addr=(null) votes=0 born=312 seen=312 proc=00000000000000000000000000000000
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: ais_status_callback: status: Cluster-Server-2 is now member (was unknown)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: crm_update_peer: Node Cluster-Server-2: id=2 state=member (new) addr=(null) votes=0 born=308 seen=312 proc=00000000000000000000000000000000
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: post_cache_update: Updated cache after membership event 312.
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: init_cman_connection: Adding fd=9 to mainloop
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: do_ha_control: Connected to the cluster
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_READCONFIG
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LRM_CONNECT
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_lrm_control: Connecting to the LRM
Sep 10 15:21:19 Cluster-Server-2 lrmd: [40194]: debug: on_msg_register:client crmd [40197] registered
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_lrm_control: LRM connection established
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CCM_CONNECT
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: do_started: Delaying start, Config not read (0000000000000040)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: do_started: Delaying start, Config not read (0000000000000040)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: pcmk_cpg_membership: Member[0] 2 
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: notice: crmd_peer_update: Status update: Client Cluster-Server-2/crmd now has status [online] (DC=<null>)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-2: id=2 seen=312 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: info: do_started: Delaying start, Config not read (0000000000000040)
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 3 : Parsing CIB options
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_started: Init server comms
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: notice: do_started: The local CRM is operational
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:19 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_QUERY
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: notice: update_node_processes: 0x1cb61e0 Node 1 now known as Cluster-Server-1, was: 
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000000102 (was 00000000000000000000000000000002)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000000102 (new)
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000000102 (new)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000000102 (new)
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000000102)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000100102 (new)
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000100102 (new)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000100102 (new)
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000100112 (new)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000100112 (new)
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000100112 (new)
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000101112 (new)
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000101112 (new)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000101112 (new)
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111112 (new)
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111112 (new)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111112 (new)
Sep 10 15:21:19 Cluster-Server-2 pacemakerd: [40187]: debug: update_node_processes: Node Cluster-Server-1 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:19 Cluster-Server-2 stonith-ng: [40193]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:19 Cluster-Server-2 attrd: [40195]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:19 Cluster-Server-2 cib: [40192]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: do_cib_control: CIB connection established
Sep 10 15:21:20 Cluster-Server-1 cib: [48709]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 48715 (e00c5ff0-ca9f-4fba-a6df-4b13f36ff633): on
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_HA_CONNECT
Sep 10 15:21:20 Cluster-Server-1 cib: [48709]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 48715 (e00c5ff0-ca9f-4fba-a6df-4b13f36ff633): on
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: get_cluster_type: Cluster type is: 'cman'
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: notice: setup_cib: Watching for stonith topology changes
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: info: main: Starting stonith-ng mainloop
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=0 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:20 Cluster-Server-1 stonith-ng: [48710]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:20 Cluster-Server-1 cib: [48709]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 48710 (928f7513-3a99-4c91-aada-ea04881ab5f4): on
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: init_cpg_connection: Adding fd=7 to mainloop
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: get_local_node_name: Using CMAN node name: Cluster-Server-1
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: init_ais_connection_once: Connection to 'cman': established
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: crm_new_peer: Creating entry for node Cluster-Server-1/1
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: crm_new_peer: Node Cluster-Server-1 now has id: 1
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: crm_new_peer: Node 1 is now known as Cluster-Server-1
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: ais_status_callback: status: Cluster-Server-1 is now unknown
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
Sep 10 15:21:20 Cluster-Server-1 pacemakerd: [48700]: debug: pcmk_client_connect: Channel 0x17ec9f0 connected: 4 children
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: init_cman_connection: Configuring Pacemaker to obtain quorum from cman
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: notice: cman_event_callback: Membership 312: quorum acquired
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: ais_status_callback: status: Cluster-Server-1 is now member (was unknown)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: crm_update_peer: Node Cluster-Server-1: id=1 state=member (new) addr=(null) votes=0 born=312 seen=312 proc=00000000000000000000000000000000
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: crm_new_peer: Creating entry for node Cluster-Server-2/2
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: crm_new_peer: Node Cluster-Server-2 now has id: 2
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: crm_new_peer: Node 2 is now known as Cluster-Server-2
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: ais_status_callback: status: Cluster-Server-2 is now unknown
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: ais_status_callback: status: Cluster-Server-2 is now member (was unknown)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: crm_update_peer: Node Cluster-Server-2: id=2 state=member (new) addr=(null) votes=0 born=312 seen=312 proc=00000000000000000000000000000000
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: post_cache_update: Updated cache after membership event 312.
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: init_cman_connection: Adding fd=9 to mainloop
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: do_ha_control: Connected to the cluster
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_READCONFIG
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LRM_CONNECT
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_lrm_control: Connecting to the LRM
Sep 10 15:21:20 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client crmd [48715] registered
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_lrm_control: LRM connection established
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CCM_CONNECT
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: do_started: Delaying start, Config not read (0000000000000040)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: info: do_started: Delaying start, Config not read (0000000000000040)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: config_query_callback: Call 3 : Parsing CIB options
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: notice: crmd_peer_update: Status update: Client Cluster-Server-1/crmd now has status [online] (DC=<null>)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: notice: crmd_peer_update: Status update: Client Cluster-Server-2/crmd now has status [online] (DC=<null>)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: crm_update_peer: Node Cluster-Server-2: id=2 seen=312 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_STARTED
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_started: Init server comms
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: notice: do_started: The local CRM is operational
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:20 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_QUERY
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_query: Querying for a DC
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=14
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000000002 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000000102 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000100102 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000100112 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000101112 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000111112 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: notice: crmd_peer_update: Status update: Client Cluster-Server-1/crmd now has status [online] (DC=<null>)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: crm_update_peer: Node Cluster-Server-1: id=1 seen=312 proc=00000000000000000000000000111312 (new)
Sep 10 15:21:20 Cluster-Server-2 crmd: [40197]: debug: te_connect_stonith: Attempting connection to fencing daemon...
Sep 10 15:21:21 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_query: Querying for a DC
Sep 10 15:21:21 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:21:21 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=14
Sep 10 15:21:21 Cluster-Server-1 crmd: [48715]: debug: te_connect_stonith: Attempting connection to fencing daemon...
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/st_command
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: get_stonith_token: Obtained registration token: 4f26b7b0-59c4-4c67-9b7f-b931ac3df9f9
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/st_callback
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: get_stonith_token: Obtained registration token: ce26c554-abf2-4333-82aa-9f8b28e18a7c
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: stonith_api_signon: Connection to STONITH successful
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: pcmk_cpg_membership: Member[0] 1 
Sep 10 15:21:21 Cluster-Server-2 crmd: [40197]: debug: pcmk_cpg_membership: Member[1] 2 
Sep 10 15:21:21 Cluster-Server-2 stonith-ng: [40193]: debug: stonith_command: Processing register from crmd (               0)
Sep 10 15:21:21 Cluster-Server-2 stonith-ng: [40193]: debug: stonith_command: Processing st_notify from 40197 (               0)
Sep 10 15:21:21 Cluster-Server-2 stonith-ng: [40193]: debug: stonith_command: Setting st_notify_disconnect callbacks for 40197 (ce26c554-abf2-4333-82aa-9f8b28e18a7c): ON
Sep 10 15:21:21 Cluster-Server-2 stonith-ng: [40193]: debug: stonith_command: Processing st_notify from 40197 (               0)
Sep 10 15:21:21 Cluster-Server-2 stonith-ng: [40193]: debug: stonith_command: Setting st_fence callbacks for 40197 (ce26c554-abf2-4333-82aa-9f8b28e18a7c): ON
Sep 10 15:21:22 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/st_command
Sep 10 15:21:22 Cluster-Server-1 crmd: [48715]: debug: get_stonith_token: Obtained registration token: 9694a93b-1cd0-4228-a8b3-b6ead5d6ff79
Sep 10 15:21:22 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/st_callback
Sep 10 15:21:22 Cluster-Server-1 crmd: [48715]: debug: get_stonith_token: Obtained registration token: f0678716-b283-4a01-8e60-13b1b327f16e
Sep 10 15:21:22 Cluster-Server-1 crmd: [48715]: debug: stonith_api_signon: Connection to STONITH successful
Sep 10 15:21:22 Cluster-Server-1 stonith-ng: [48710]: debug: stonith_command: Processing register from crmd (               0)
Sep 10 15:21:22 Cluster-Server-1 stonith-ng: [48710]: debug: stonith_command: Processing st_notify from 48715 (               0)
Sep 10 15:21:22 Cluster-Server-1 stonith-ng: [48710]: debug: stonith_command: Setting st_notify_disconnect callbacks for 48715 (f0678716-b283-4a01-8e60-13b1b327f16e): ON
Sep 10 15:21:22 Cluster-Server-1 stonith-ng: [48710]: debug: stonith_command: Processing st_notify from 48715 (               0)
Sep 10 15:21:22 Cluster-Server-1 stonith-ng: [48710]: debug: stonith_command: Setting st_fence callbacks for 48715 (f0678716-b283-4a01-8e60-13b1b327f16e): ON
Sep 10 15:21:23 Cluster-Server-2 attrd: [40195]: debug: cib_connect: CIB signon attempt 1
Sep 10 15:21:23 Cluster-Server-2 attrd: [40195]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:23 Cluster-Server-2 attrd: [40195]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:23 Cluster-Server-2 attrd: [40195]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:21:23 Cluster-Server-2 attrd: [40195]: info: cib_connect: Connected to the CIB after 1 signon attempts
Sep 10 15:21:23 Cluster-Server-2 attrd: [40195]: info: cib_connect: Sending full refresh
Sep 10 15:21:23 Cluster-Server-2 cib: [40192]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 40195 (1a9465eb-3a95-40ab-8d91-6918e3ee908a): on
Sep 10 15:21:24 Cluster-Server-1 attrd: [48713]: debug: cib_connect: CIB signon attempt 1
Sep 10 15:21:24 Cluster-Server-1 attrd: [48713]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:21:24 Cluster-Server-1 attrd: [48713]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:21:24 Cluster-Server-1 attrd: [48713]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:21:24 Cluster-Server-1 attrd: [48713]: info: cib_connect: Connected to the CIB after 1 signon attempts
Sep 10 15:21:24 Cluster-Server-1 attrd: [48713]: info: cib_connect: Sending full refresh
Sep 10 15:21:24 Cluster-Server-1 cib: [48709]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 48713 (e8e0717f-6a38-4c76-a1f9-27b7b5c96e9c): on
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_count_vote: Created voted hash
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 2 (owner: Cluster-Server-2) pass: vote from Cluster-Server-2 (Host name)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_vote: Started election 2
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=16
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_count_vote: Created voted hash
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_count_vote: Election 2 (current: 2, owner: Cluster-Server-1): Processed vote from Cluster-Server-1 (Recorded)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 10000 vs 20000 (usec)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 3 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_ELECTION -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=17
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_RELEASE
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_dc_release: Releasing the role of DC
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_RELEASED
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: do_dc_release: DC role released
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_PE_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_TE_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: cib_client_del_notify_callback: Removing callback for cib_diff_notify events
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: do_te_control: Transitioner is now inactive
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 48715 (e00c5ff0-ca9f-4fba-a6df-4b13f36ff633): off
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_RELEASE_SUCCESS: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_dc_release ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_RELEASE_SUCCESS from do_dc_release() received in state S_PENDING
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-1
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-1
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-1
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-1: join_ack_nack
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-1: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/transient_attributes
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: info: update_attrd: Connecting to attrd...
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: terminate=(null) for Cluster-Server-1
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: shutdown=(null) for Cluster-Server-1
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: terminate=<null>
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for terminate
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: shutdown=<null>
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for shutdown
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49222 exited with return code 0.
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/transient_attributes": ok (rc=0)
Sep 10 15:21:40 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for probe_complete
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: (null), Stored: (null)
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: New value of probe_complete is true
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] does not exist
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49223 exited with return code 0.
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: notice: attrd_perform_update: Sent update 4: probe_complete=true
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] does not exist
Sep 10 15:21:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: notice: attrd_perform_update: Sent update 7: probe_complete=true
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 4 for probe_complete=true passed
Sep 10 15:21:40 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 7 for probe_complete=true passed
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_DC_TIMEOUT: [ state=S_PENDING cause=C_TIMER_POPPED origin=crm_timer_popped ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_WARN  
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=16
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 20000us
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 20000 vs 10000  (usec)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 2 (current: 2, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 20000us
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 20000 vs 10000  (usec)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_election_count_vote: Election 2 (owner: Cluster-Server-1) pass: vote from Cluster-Server-1 (Uptime)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 2 non-votes (2 total)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 20000us
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 3
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Election Timeout (I_ELECTION_DC:120000ms) already running: src=16
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 20000us
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 3 (current: 3, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 20000us
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 20000 vs 0  (usec)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 3 (current: 3, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_te_control: Registering TE UUID: 81b7c738-e2a4-46c6-91bd-4df2c9c62d66
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: set_graph_functions: Setting custom graph functions
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_te_control: Transitioner is now active
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 40197 (18d9948e-23f7-4556-bf9e-dcf6b666e639): on
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=19
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_readwrite: We are now in R/W mode
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/5, version=0.0.1): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/6, version=0.0.2): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] does not exist
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_modify op
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="0" num_updates="2" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="1" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <crm_config >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" __crm_diff_marker__="added:top" >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </cluster_property_set>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </crm_config>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=0.1.1): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] does not exist
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-1: Initializing join data (flag=true)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: join_make_offer: Making join offers based on membership 312
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-1: Sending offer to Cluster-Server-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-1: Sending offer to Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_modify op
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="1" num_updates="1" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-2" update-client="crmd" cib-last-written="Mon Sep 10 15:21:40 2012" >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <crm_config >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" __crm_diff_marker__="added:top" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </cluster_property_set>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </crm_config>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/12, version=0.2.1): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 13 : Parsing CIB options
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 14 : Parsing CIB options
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 16 : Parsing CIB options
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-1: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283300-6)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-1: Still waiting on 1 outstanding offers
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-1: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283300-4)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-1: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=25
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-1 for 2 clients
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-1: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/17, version=0.2.1): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-1: Still waiting on 2 integrated nodes
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-1 results
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-1: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-1: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_modify op
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="2" num_updates="1" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-2" update-client="crmd" cib-last-written="Mon Sep 10 15:21:40 2012" >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <nodes >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <node id="Cluster-Server-1" uname="Cluster-Server-1" type="normal" __crm_diff_marker__="added:top" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </nodes>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/18, version=0.3.1): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_modify op
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="3" num_updates="1" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="4" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-2" update-client="crmd" cib-last-written="Mon Sep 10 15:21:40 2012" >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <nodes >
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <node id="Cluster-Server-2" uname="Cluster-Server-2" type="normal" __crm_diff_marker__="added:top" />
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </nodes>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.4.1): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 42437 exited with return code 0.
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-1: join_ack_nack
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-1: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/transient_attributes
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: update_attrd: Connecting to attrd...
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: terminate=<null>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: terminate=(null) for Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for terminate
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: shutdown=(null) for Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: //node_state[@uname='Cluster-Server-2']/transient_attributes was already removed
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: shutdown=<null>
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for shutdown
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-1: Updating node state to member for Cluster-Server-1
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/transient_attributes (origin=local/crmd/20, version=0.4.2): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: //node_state[@uname='Cluster-Server-1']/lrm was already removed
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-1: Registered callback for LRM update 22
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-1: Updating node state to member for Cluster-Server-2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-1: Registered callback for LRM update 24
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/21, version=0.4.3): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/transient_attributes": ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 22 complete
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-1 complete: join_update_complete_callback
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=27)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: //node_state[@uname='Cluster-Server-2']/lrm was already removed
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 28: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/23, version=0.4.5): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.4.4 -> 0.4.5 (S_POLICY_ENGINE)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.5 -> 0.4.6 (S_POLICY_ENGINE)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 24 complete
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 42438 exited with return code 0.
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: //node_state[@uname='Cluster-Server-1']/transient_attributes was already removed
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/transient_attributes (origin=Cluster-Server-1/crmd/8, version=0.4.7): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.4.6 -> 0.4.7 (S_POLICY_ENGINE)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/25, version=0.4.8): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.7 -> 0.4.8 (S_POLICY_ENGINE)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.8 -> 0.4.9 (S_POLICY_ENGINE)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/27, version=0.4.10): ok (rc=0)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.9 -> 0.4.10 (S_POLICY_ENGINE)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=28, ref=pe_calc-dc-1347283300-10, seq=312, quorate=1
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is enabled
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: unpack_config: On loss of CCM Quorum: Stop ALL resources
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: notice: stage6: Delaying fencing operations until there are resources to manage
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: debug: get_last_sequence: Series file /var/lib/pengine/pe-input.last does not exist
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283300-10" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-0.bz2" graph-errors="0" graph-warnings="0" config-errors="1" config-warnings="0" >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="0" >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" priority="1000000" >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="3" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" priority="1000000" >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="2" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 0: 2 actions in 2 synapses
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1347283300-10) derived from /var/lib/pengine/pe-input-0.bz2
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for probe_complete
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: (null), Stored: (null)
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: New value of probe_complete is true
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-0.bz2
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] does not exist
Sep 10 15:21:40 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 0 (Complete=0, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-0.bz2): In-progress
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 0 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-0.bz2): Complete
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 0 is now complete
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 0 status: done - <null>
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: notice: attrd_perform_update: Sent update 4: probe_complete=true
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=37
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 4 for probe_complete=true passed
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.10 -> 0.4.11 (S_IDLE)
Sep 10 15:21:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:40 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 6 for probe_complete=true passed
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.11 -> 0.4.12 (S_IDLE)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.12 -> 0.4.13 (S_IDLE)
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.4.13 -> 0.4.14 (S_IDLE)
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected bdeb17841a8a81071a45941b740ffab0, calculated 7cefe735f2b561e5c0292c1589ee7e8c
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.4.14 -> 0.5.1 not applied to 0.4.14: Failed application of an update diff
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 10000 vs 30000 (usec)
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 4 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=21
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.5.1 -> 0.5.2 (sync in progress)
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.5.2 -> 0.5.3 (sync in progress)
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.5.3 -> 0.5.4 (sync in progress)
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.5.4 from Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 9 for probe_complete=true passed
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-2
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 11 for probe_complete=true passed
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-2
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-2
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-2: join_ack_nack
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-2: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:21:41 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:41 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.4.14 -> 0.5.1 from Cluster-Server-1
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="4" num_updates="14" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-2" update-client="crmd" cib-last-written="Mon Sep 10 15:21:40 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <crm_config >
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="1000" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="2000" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="1000" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </cluster_property_set>
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </crm_config>
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.5.1): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.4.14 -> 0.5.1 (S_IDLE)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.5.1) : Non-status change
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="4" num_updates="14" >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="4" num_updates="14" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-2" update-client="crmd" cib-last-written="Mon Sep 10 15:21:40 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <crm_config >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="1000" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="2000" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="1000" __crm_diff_marker__="added:top" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </cluster_property_set>
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </crm_config>
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 31: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 30000us
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 4
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=41
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 30000us
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 4 (current: 4, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/29, version=0.5.2): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_ELECTION
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 32 : Parsing CIB options
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 8 for probe_complete=true passed
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.5.4): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 30000us
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 30000 vs 0  (usec)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 4 (current: 4, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=44
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/33, version=0.5.6): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/34, version=0.5.7): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/36, version=0.5.8): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-2: Initializing join data (flag=true)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-2: Sending offer to Cluster-Server-1
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-2: Sending offer to Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/38, version=0.5.10): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 10 for probe_complete=true passed
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 39 : Parsing CIB options
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-2: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283301-7)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-2: Still waiting on 1 outstanding offers
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Cluster-Server-2 has a better generation number than the current max Cluster-Server-1
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Max generation <generation_tuple epoch="5" num_updates="9" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:21:41 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Their generation <generation_tuple epoch="5" num_updates="11" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:21:41 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-2: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283301-16)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-2: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=48
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-2 for 2 clients
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-2: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 42540 exited with return code 0.
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/41, version=0.5.11): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-2: Still waiting on 2 integrated nodes
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-2 results
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-2: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-2: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=0.5.12): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-2: join_ack_nack
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-2: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/43, version=0.5.13): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-2: Updating node state to member for Cluster-Server-2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-2: Registered callback for LRM update 45
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/44, version=0.5.14): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-2: Updating node state to member for Cluster-Server-1
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-2: Registered callback for LRM update 47
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 45 complete
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-2 complete: join_update_complete_callback
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/46, version=0.5.16): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:21:41 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=50)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 51: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.5.15 -> 0.5.16 (S_POLICY_ENGINE)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.16 -> 0.5.17 (S_POLICY_ENGINE)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 47 complete
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Sep 10 15:21:41 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/48, version=0.5.18): ok (rc=0)
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:41 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.17 -> 0.5.18 (S_POLICY_ENGINE)
Sep 10 15:21:42 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49299 exited with return code 0.
Sep 10 15:21:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:42 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 13 for probe_complete=true passed
Sep 10 15:21:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 15 for probe_complete=true passed
Sep 10 15:21:42 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49302 exited with return code 0.
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.18 -> 0.5.19 (S_POLICY_ENGINE)
Sep 10 15:21:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/50, version=0.5.20): ok (rc=0)
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.19 -> 0.5.20 (S_POLICY_ENGINE)
Sep 10 15:21:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:21:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 12 for probe_complete=true passed
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=51, ref=pe_calc-dc-1347283302-20, seq=312, quorate=1
Sep 10 15:21:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.20 -> 0.5.21 (S_POLICY_ENGINE)
Sep 10 15:21:42 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-1.bz2
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283302-20" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-1.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="1" />
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 1: 0 actions in 0 synapses
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1347283302-20) derived from /var/lib/pengine/pe-input-1.bz2
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 1 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-1.bz2): Complete
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 1 is now complete
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 1 status: done - <null>
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 14 for probe_complete=true passed
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=58
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.21 -> 0.5.22 (S_IDLE)
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.22 -> 0.5.23 (S_IDLE)
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.5.23 -> 0.5.24 (S_IDLE)
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [49336] registered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:49336] disconnected.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:49336] is unregistered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [49338] registered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:49338] disconnected.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:49338] is unregistered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [49340] registered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:49340] disconnected.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:49340] is unregistered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [49342] registered
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:49342] disconnected.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:49342] is unregistered
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 10000us
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 10000 vs 50000 (usec)
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 5 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=23
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-3
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 17 for probe_complete=true passed
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-3
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-3
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-3: join_ack_nack
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-3: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49356 exited with return code 0.
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 19 for probe_complete=true passed
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 21 for probe_complete=true passed
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49359 exited with return code 0.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource p_NFS_Server:0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=4:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_NFS_Server:0_monitor_0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc p_NFS_Server:0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[2] on p_NFS_Server:0 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone=[0] CRM_meta_globally_unique=[false] CRM_meta_clone_node_max=[1] CRM_meta_timeout=[20000] CRM_meta_clone_max=[2]  to the operation list.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: info: rsc:p_NFS_Server:0 probe[2] (pid 49362)
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource p_iSCSI_Daemon:0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=5:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_iSCSI_Daemon:0_monitor_0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc p_iSCSI_Daemon:0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[3] on p_iSCSI_Daemon:0 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone=[0] CRM_meta_globally_unique=[false] CRM_meta_clone_node_max=[1] CRM_meta_timeout=[20000] CRM_meta_clone_max=[2]  to the operation list.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: info: rsc:p_iSCSI_Daemon:0 probe[3] (pid 49363)
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource p_PingD:0
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=6:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_PingD:0_monitor_0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc p_PingD:0
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[4] on p_PingD:0 for client 48715, its parameters: CRM_meta_timeout=[20000] multiplier=[100] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] crm_feature_set=[3.0.6] CRM_meta_clone=[0] host_list=[192.168.1.1] CRM_meta_clone_max=[2] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: info: rsc:p_PingD:0 probe[4] (pid 49364)
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: WARN: send_ipc_message: IPC Channel to 49361 is not connected
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: WARN: send_via_callback_channel: Delivery of reply to client cibadmin/7966687b-6794-4753-b783-bd4a31539744 failed
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: WARN: do_local_notify: Sync reply to cibadmin failed: reply failed
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: info: Managed p_iSCSI_Daemon:0:monitor process 49363 exited with return code 0.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: info: operation monitor[3] on p_iSCSI_Daemon:0 for client 48715: pid 49363 exited with return code 0
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_iSCSI_Daemon:0_monitor_0 (call=3, rc=0, cib-update=15, confirmed=true) ok
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_iSCSI_Daemon:0'
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: WARN: Managed p_NFS_Server:0:monitor process 49362 exited with return code 3.
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: info: operation monitor[2] on p_NFS_Server:0 for client 48715: pid 49362 exited with return code 7 (mapped from 3)
Sep 10 15:21:43 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd not running
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_NFS_Server:0 after complete monitor op (interval=0)
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_NFS_Server:0_monitor_0 (call=2, rc=7, cib-update=16, confirmed=true) not running
Sep 10 15:21:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_NFS_Server:0'
Sep 10 15:21:43 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49371 exited with return code 0.
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.5.24 -> 0.6.1 from Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.5.24 -> 0.6.1 (S_IDLE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.6.1) : Non-status change
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="5" num_updates="24" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="5" num_updates="24" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:21:41 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="5" num_updates="24" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <clone id="NFS_Server" __crm_diff_marker__="added:top" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="NFS_Server-meta_attributes" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:21:41 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="NFS_Server-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="NFS_Server-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="NFS_Server-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="lsb" id="p_NFS_Server" type="nfs-kernel-server" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_NFS_Server-monitor-30s" interval="30s" name="monitor" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <clone id="NFS_Server" __crm_diff_marker__="added:top" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </clone>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="NFS_Server-meta_attributes" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <clone id="iSCSI_Daemon" __crm_diff_marker__="added:top" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="iSCSI_Daemon-meta_attributes" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="NFS_Server-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_Daemon-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_Daemon-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="NFS_Server-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_Daemon-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="lsb" id="p_iSCSI_Daemon" type="iscsi-scst" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="NFS_Server-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_iSCSI_Daemon-monitor-30s" interval="30s" name="monitor" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="lsb" id="p_NFS_Server" type="nfs-kernel-server" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </clone>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_NFS_Server-monitor-30s" interval="30s" name="monitor" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <clone id="PingD" __crm_diff_marker__="added:top" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="PingD-meta_attributes" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="PingD-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </clone>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="PingD-meta_attributes-resource-sticikness" name="resource-sticikness" value="0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="PingD-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="p_PingD" provider="pacemaker" type="ping" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="p_PingD-instance_attributes" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="p_PingD-instance_attributes-host_list" name="host_list" value="192.168.1.1" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <clone id="iSCSI_Daemon" __crm_diff_marker__="added:top" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="iSCSI_Daemon-meta_attributes" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="iSCSI_Daemon-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="iSCSI_Daemon-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="iSCSI_Daemon-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="lsb" id="p_iSCSI_Daemon" type="iscsi-scst" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_iSCSI_Daemon-monitor-30s" interval="30s" name="monitor" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="p_PingD-instance_attributes-multiplier" name="multiplier" value="100" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </clone>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <clone id="PingD" __crm_diff_marker__="added:top" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="PingD-meta_attributes" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_PingD-monitor-10s" interval="10s" name="monitor" timeout="5s" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="PingD-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="PingD-meta_attributes-resource-sticikness" name="resource-sticikness" value="0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="PingD-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="p_PingD" provider="pacemaker" type="ping" >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="p_PingD-instance_attributes" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="p_PingD-instance_attributes-host_list" name="host_list" value="192.168.1.1" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="p_PingD-instance_attributes-multiplier" name="multiplier" value="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </clone>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_PingD-monitor-10s" interval="10s" name="monitor" timeout="5s" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </clone>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.6.1): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 54: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/52, version=0.6.2): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 50000us
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 5
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=62
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 50000us
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 5 (current: 5, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_ELECTION
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 16 for probe_complete=true passed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 50000us
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 50000 vs 0  (usec)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 5 (current: 5, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=64
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/55, version=0.6.5): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/56, version=0.6.6): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/58, version=0.6.7): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-3: Initializing join data (flag=true)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-3: Sending offer to Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-3: Sending offer to Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/60, version=0.6.9): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-3
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 61 : Parsing CIB options
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-3
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-3: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283303-24)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-3
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-3: Still waiting on 1 outstanding offers
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-3: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283303-10)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-3
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-3: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=68
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-3 for 2 clients
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-3: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/63, version=0.6.9): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-3: Still waiting on 2 integrated nodes
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-3 results
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-3: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-3: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/64, version=0.6.10): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-3
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/65, version=0.6.11): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-3: Updating node state to member for Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-3: Registered callback for LRM update 67
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-3: join_ack_nack
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-3: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/66, version=0.6.12): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 67 complete
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-3: Still waiting on 1 finalized nodes
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-3: Updating node state to member for Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-3: Registered callback for LRM update 69
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/68, version=0.6.14): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 69 complete
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-3 complete: join_update_complete_callback
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 42564 exited with return code 0.
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=72)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/70, version=0.6.16): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 73: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.15 -> 0.6.16 (S_POLICY_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.16 -> 0.6.17 (S_POLICY_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/72, version=0.6.18): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.17 -> 0.6.18 (S_POLICY_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Stopped: [ p_NFS_Server:0 p_NFS_Server:1 ]
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Stopped: [ p_iSCSI_Daemon:0 p_iSCSI_Daemon:1 ]
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Stopped: [ p_PingD:0 p_PingD:1 ]
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_NFS_Server:0 on Cluster-Server-1 (Stopped)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_iSCSI_Daemon:0 on Cluster-Server-1 (Stopped)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_PingD:0 on Cluster-Server-1 (Stopped)
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_NFS_Server:1 on Cluster-Server-2 (Stopped)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_iSCSI_Daemon:1 on Cluster-Server-2 (Stopped)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_PingD:1 on Cluster-Server-2 (Stopped)
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 18 for probe_complete=true passed
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_NFS_Server:0 on Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_NFS_Server:1 on Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_iSCSI_Daemon:0 on Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_iSCSI_Daemon:1 on Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_PingD:0 on Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_PingD:1 on Cluster-Server-2
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_NFS_Server:0	(Cluster-Server-1)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_NFS_Server:1	(Cluster-Server-2)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_iSCSI_Daemon:0	(Cluster-Server-1)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_iSCSI_Daemon:1	(Cluster-Server-2)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_PingD:0	(Cluster-Server-1)
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_PingD:1	(Cluster-Server-2)
Sep 10 15:21:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 20 for probe_complete=true passed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=73, ref=pe_calc-dc-1347283303-28, seq=312, quorate=1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.18 -> 0.6.19 (S_POLICY_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.19 -> 0.6.20 (S_POLICY_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.20 -> 0.6.21 (S_POLICY_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283303-28" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-2.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="12" operation="monitor" operation_key="p_NFS_Server:0_monitor_30000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:0" long-id="NFS_Server:p_NFS_Server:0" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="11" operation="start" operation_key="p_NFS_Server:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="11" operation="start" operation_key="p_NFS_Server:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:0" long-id="NFS_Server:p_NFS_Server:0" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="15" operation="start" operation_key="NFS_Server_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="4" operation="monitor" operation_key="p_NFS_Server:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:0" long-id="NFS_Server:p_NFS_Server:0" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="14" operation="monitor" operation_key="p_NFS_Server:1_monitor_30000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:1" long-id="NFS_Server:p_NFS_Server:1" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="start" operation_key="p_NFS_Server:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="13" operation="start" operation_key="p_NFS_Server:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:1" long-id="NFS_Server:p_NFS_Server:1" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="15" operation="start" operation_key="NFS_Server_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="8" operation="monitor" operation_key="p_NFS_Server:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:1" long-id="NFS_Server:p_NFS_Server:1" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" priority="1000000" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="16" operation="running" operation_key="NFS_Server_running_0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="11" operation="start" operation_key="p_NFS_Server:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="start" operation_key="p_NFS_Server:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="15" operation="start" operation_key="NFS_Server_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="15" operation="start" operation_key="NFS_Server_start_0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="20" operation="monitor" operation_key="p_iSCSI_Daemon:0_monitor_30000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:0" long-id="iSCSI_Daemon:p_iSCSI_Daemon:0" class="lsb" type="iscsi-scst" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="19" operation="start" operation_key="p_iSCSI_Daemon:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="19" operation="start" operation_key="p_iSCSI_Daemon:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:0" long-id="iSCSI_Daemon:p_iSCSI_Daemon:0" class="lsb" type="iscsi-scst" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="23" operation="start" operation_key="iSCSI_Daemon_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="5" operation="monitor" operation_key="p_iSCSI_Daemon:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:0" long-id="iSCSI_Daemon:p_iSCSI_Daemon:0" class="lsb" type="iscsi-scst" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="22" operation="monitor" operation_key="p_iSCSI_Daemon:1_monitor_30000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:1" long-id="iSCSI_Daemon:p_iSCSI_Daemon:1" class="lsb" type="iscsi-scst" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="21" operation="start" operation_key="p_iSCSI_Daemon:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="21" operation="start" operation_key="p_iSCSI_Daemon:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:1" long-id="iSCSI_Daemon:p_iSCSI_Daemon:1" class="lsb" type="iscsi-scst" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="23" operation="start" operation_key="iSCSI_Daemon_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="13" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="9" operation="monitor" operation_key="p_iSCSI_Daemon:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:1" long-id="iSCSI_Daemon:p_iSCSI_Daemon:1" class="lsb" type="iscsi-scst" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="14" priority="1000000" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="24" operation="running" operation_key="iSCSI_Daemon_running_0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="19" operation="start" operation_key="p_iSCSI_Daemon:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="21" operation="start" operation_key="p_iSCSI_Daemon:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="23" operation="start" operation_key="iSCSI_Daemon_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="15" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="23" operation="start" operation_key="iSCSI_Daemon_start_0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="16" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="28" operation="monitor" operation_key="p_PingD:0_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:0" long-id="PingD:p_PingD:0" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:43 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-2.bz2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="5000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="27" operation="start" operation_key="p_PingD:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="17" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="27" operation="start" operation_key="p_PingD:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:0" long-id="PingD:p_PingD:0" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="31" operation="start" operation_key="PingD_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="18" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="6" operation="monitor" operation_key="p_PingD:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:0" long-id="PingD:p_PingD:0" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="19" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="30" operation="monitor" operation_key="p_PingD:1_monitor_10000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:1" long-id="PingD:p_PingD:1" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="5000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="29" operation="start" operation_key="p_PingD:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="20" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="29" operation="start" operation_key="p_PingD:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:1" long-id="PingD:p_PingD:1" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="31" operation="start" operation_key="PingD_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="21" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="10" operation="monitor" operation_key="p_PingD:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:1" long-id="PingD:p_PingD:1" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="22" priority="1000000" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="32" operation="running" operation_key="PingD_running_0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="27" operation="start" operation_key="p_PingD:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="29" operation="start" operation_key="p_PingD:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="31" operation="start" operation_key="PingD_start_0" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="23" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="31" operation="start" operation_key="PingD_start_0" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="24" priority="1000000" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="7" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="8" operation="monitor" operation_key="p_NFS_Server:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="9" operation="monitor" operation_key="p_iSCSI_Daemon:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="10" operation="monitor" operation_key="p_PingD:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="25" priority="1000000" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="3" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="4" operation="monitor" operation_key="p_NFS_Server:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="5" operation="monitor" operation_key="p_iSCSI_Daemon:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="6" operation="monitor" operation_key="p_PingD:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="26" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="2" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="3" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="7" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 2: 27 actions in 27 synapses
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1347283303-28) derived from /var/lib/pengine/pe-input-2.bz2
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.21 -> 0.6.22 (S_TRANSITION_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 4: monitor p_NFS_Server:0_monitor_0 on Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 8: monitor p_NFS_Server:1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource p_NFS_Server:1
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=8:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_NFS_Server:1_monitor_0
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc p_NFS_Server:1
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[2] on p_NFS_Server:1 for client 40197, its parameters: crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone=[1] CRM_meta_globally_unique=[false] CRM_meta_clone_node_max=[1] CRM_meta_timeout=[20000] CRM_meta_clone_max=[2]  to the operation list.
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: info: rsc:p_NFS_Server:1 probe[2] (pid 42572)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 15 fired and confirmed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 5: monitor p_iSCSI_Daemon:0_monitor_0 on Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 9: monitor p_iSCSI_Daemon:1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource p_iSCSI_Daemon:1
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_iSCSI_Daemon:1_monitor_0
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc p_iSCSI_Daemon:1
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[3] on p_iSCSI_Daemon:1 for client 40197, its parameters: crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone=[1] CRM_meta_globally_unique=[false] CRM_meta_clone_node_max=[1] CRM_meta_timeout=[20000] CRM_meta_clone_max=[2]  to the operation list.
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: info: rsc:p_iSCSI_Daemon:1 probe[3] (pid 42574)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 23 fired and confirmed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 6: monitor p_PingD:0_monitor_0 on Cluster-Server-1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 10: monitor p_PingD:1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource p_PingD:1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_PingD:1_monitor_0
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc p_PingD:1
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[4] on p_PingD:1 for client 40197, its parameters: CRM_meta_timeout=[20000] multiplier=[100] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] crm_feature_set=[3.0.6] CRM_meta_clone=[1] host_list=[192.168.1.1] CRM_meta_clone_max=[2] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: info: rsc:p_PingD:1 probe[4] (pid 42576)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 31 fired and confirmed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=0, Pending=6, Fired=9, Skipped=0, Incomplete=18, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=3, Pending=6, Fired=0, Skipped=0, Incomplete=18, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:21:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.6.22): ok (rc=0)
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: WARN: Managed p_NFS_Server:1:monitor process 42572 exited with return code 3.
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: info: operation monitor[2] on p_NFS_Server:1 for client 40197: pid 42572 exited with return code 7 (mapped from 3)
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd not running
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_NFS_Server:1 after complete monitor op (interval=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_NFS_Server:1_monitor_0 (call=2, rc=7, cib-update=74, confirmed=true) not running
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_NFS_Server:1'
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.22 -> 0.6.23 (S_TRANSITION_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_NFS_Server:1_monitor_0 (8) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=4, Pending=5, Fired=0, Skipped=0, Incomplete=18, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: info: Managed p_iSCSI_Daemon:1:monitor process 42574 exited with return code 0.
Sep 10 15:21:43 Cluster-Server-2 lrmd: [40194]: info: operation monitor[3] on p_iSCSI_Daemon:1 for client 40197: pid 42574 exited with return code 0
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_iSCSI_Daemon:1_monitor_0 (call=3, rc=0, cib-update=75, confirmed=true) ok
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_iSCSI_Daemon:1'
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.23 -> 0.6.24 (S_TRANSITION_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 9 (p_iSCSI_Daemon:1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 0): Error
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_iSCSI_Daemon:1_last_failure_0, magic=0:0;9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.6.24) : Event failed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort priority upgraded from 0 to 1
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort action done superceeded by restart
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_iSCSI_Daemon:1_monitor_0 (9) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=5, Pending=4, Fired=0, Skipped=13, Incomplete=5, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.24 -> 0.6.25 (S_TRANSITION_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 5 (p_iSCSI_Daemon:0_monitor_0) on Cluster-Server-1 failed (target: 7 vs. rc: 0): Error
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_iSCSI_Daemon:0_last_failure_0, magic=0:0;5:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.6.25) : Event failed
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_iSCSI_Daemon:0_monitor_0 (5) confirmed on Cluster-Server-1 (rc=4)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=6, Pending=3, Fired=0, Skipped=13, Incomplete=5, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.25 -> 0.6.26 (S_TRANSITION_ENGINE)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_NFS_Server:0_monitor_0 (4) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:21:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=7, Pending=2, Fired=0, Skipped=13, Incomplete=5, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:44 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:21:44 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:21:44 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 49392 exited with return code 0.
Sep 10 15:21:44 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:21:44 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/cibadmin/2, version=0.6.26): ok (rc=0)
Sep 10 15:21:45 Cluster-Server-1 attrd_updater: [49395]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:21:45 Cluster-Server-1 attrd_updater: [49395]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:45 Cluster-Server-1 attrd_updater: [49395]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:21:45 Cluster-Server-1 attrd_updater: [49395]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:21:45 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:21:45 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for pingd
Sep 10 15:21:45 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: (null), Stored: (null)
Sep 10 15:21:45 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: New value of pingd is 100
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: Managed p_PingD:0:monitor process 49364 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[4] on p_PingD:0 for client 48715: pid 49364 exited with return code 0
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_PingD:0_monitor_0 (call=4, rc=0, cib-update=17, confirmed=true) ok
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_PingD:0'
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:21:45 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:21:45 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=15:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_iSCSI_Daemon:0_monitor_30000
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[5] on p_iSCSI_Daemon:0 for client 48715, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_interval=[30000] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 49403)
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=25:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_PingD:0_monitor_10000
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[6] on p_PingD:0 for client 48715, its parameters: CRM_meta_timeout=[5000] multiplier=[100] CRM_meta_name=[monitor] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] crm_feature_set=[3.0.6] CRM_meta_clone=[0] host_list=[192.168.1.1] CRM_meta_clone_max=[2] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: rsc:p_PingD:0 monitor[6] (pid 49404)
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=5:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_NFS_Server:0_start_0
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc p_NFS_Server:0
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[7] on p_NFS_Server:0 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone=[0] CRM_meta_globally_unique=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: rsc:p_NFS_Server:0 start[7] (pid 49408)
Sep 10 15:21:45 Cluster-Server-1 lrmd: [49408]: WARN: For LSB init script, no additional parameters are needed.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: Managed p_iSCSI_Daemon:0:monitor process 49403 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 49403 exited with return code 0
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_iSCSI_Daemon:0_monitor_30000 (call=5, rc=0, cib-update=18, confirmed=false) ok
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_iSCSI_Daemon:0'
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: RA output: (p_NFS_Server:0:start:stdout) Exporting directories for NFS kernel daemon...
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: RA output: (p_NFS_Server:0:start:stdout) .
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: RA output: (p_NFS_Server:0:start:stdout) Starting NFS kernel daemon:
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: RA output: (p_NFS_Server:0:start:stdout)  nfsd
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: RA output: (p_NFS_Server:0:start:stdout)  mountd
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: RA output: (p_NFS_Server:0:start:stdout) .
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: Managed p_NFS_Server:0:start process 49408 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: operation start[7] on p_NFS_Server:0 for client 48715: pid 49408 exited with return code 0
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_NFS_Server:0_start_0 (call=7, rc=0, cib-update=19, confirmed=true) ok
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'p_NFS_Server:0'
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=6:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_NFS_Server:0_monitor_30000
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[8] on p_NFS_Server:0 for client 48715, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_interval=[30000] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: rsc:p_NFS_Server:0 monitor[8] (pid 49448)
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: Managed p_NFS_Server:0:monitor process 49448 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 49448 exited with return code 0
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_NFS_Server:0_monitor_30000 (call=8, rc=0, cib-update=20, confirmed=false) ok
Sep 10 15:21:45 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_NFS_Server:0'
Sep 10 15:21:45 Cluster-Server-2 attrd_updater: [42969]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:21:45 Cluster-Server-2 attrd_updater: [42969]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:45 Cluster-Server-2 attrd_updater: [42969]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:21:45 Cluster-Server-2 attrd_updater: [42969]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:21:45 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:21:45 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for pingd
Sep 10 15:21:45 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: (null), Stored: (null)
Sep 10 15:21:45 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: New value of pingd is 100
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: Managed p_PingD:1:monitor process 42576 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[4] on p_PingD:1 for client 40197: pid 42576 exited with return code 0
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_PingD:1_monitor_0 (call=4, rc=0, cib-update=76, confirmed=true) ok
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_PingD:1'
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.26 -> 0.6.27 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 10 (p_PingD:1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 0): Error
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_PingD:1_last_failure_0, magic=0:0;10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.6.27) : Event failed
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_PingD:1_monitor_0 (10) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 7: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=8, Pending=1, Fired=1, Skipped=13, Incomplete=4, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:21:45 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=9, Pending=1, Fired=0, Skipped=13, Incomplete=4, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.27 -> 0.6.28 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 6 (p_PingD:0_monitor_0) on Cluster-Server-1 failed (target: 7 vs. rc: 0): Error
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_PingD:0_last_failure_0, magic=0:0;6:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.6.28) : Event failed
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_PingD:0_monitor_0 (6) confirmed on Cluster-Server-1 (rc=4)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 2 (Complete=10, Pending=0, Fired=1, Skipped=13, Incomplete=3, Source=/var/lib/pengine/pe-input-2.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 2 (Complete=11, Pending=0, Fired=0, Skipped=13, Incomplete=3, Source=/var/lib/pengine/pe-input-2.bz2): Stopped
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 2 is now complete
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 2 status: restart - Event failed
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 77: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=77, ref=pe_calc-dc-1347283305-37, seq=312, quorate=1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: short_print:      Stopped: [ p_NFS_Server:0 p_NFS_Server:1 ]
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_NFS_Server:0 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_NFS_Server:1 on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_iSCSI_Daemon:0 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (30s) for p_iSCSI_Daemon:1 on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_PingD:0 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_PingD:1 on Cluster-Server-2
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_NFS_Server:0	(Cluster-Server-1)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_NFS_Server:1	(Cluster-Server-2)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283305-37" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-3.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="3" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="6" operation="monitor" operation_key="p_NFS_Server:0_monitor_30000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:0" long-id="NFS_Server:p_NFS_Server:0" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="5" operation="start" operation_key="p_NFS_Server:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="5" operation="start" operation_key="p_NFS_Server:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:0" long-id="NFS_Server:p_NFS_Server:0" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="9" operation="start" operation_key="NFS_Server_start_0" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="8" operation="monitor" operation_key="p_NFS_Server:1_monitor_30000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:1" long-id="NFS_Server:p_NFS_Server:1" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="7" operation="start" operation_key="p_NFS_Server:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="7" operation="start" operation_key="p_NFS_Server:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_NFS_Server:1" long-id="NFS_Server:p_NFS_Server:1" class="lsb" type="nfs-kernel-server" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="9" operation="start" operation_key="NFS_Server_start_0" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" priority="1000000" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="10" operation="running" operation_key="NFS_Server_running_0" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="5" operation="start" operation_key="p_NFS_Server:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="7" operation="start" operation_key="p_NFS_Server:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="9" operation="start" operation_key="NFS_Server_start_0" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="9" operation="start" operation_key="NFS_Server_start_0" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="monitor" operation_key="p_iSCSI_Daemon:0_monitor_30000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:0" long-id="iSCSI_Daemon:p_iSCSI_Daemon:0" class="lsb" type="iscsi-scst" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pengine/pe-input-3.bz2
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="monitor" operation_key="p_iSCSI_Daemon:1_monitor_30000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_iSCSI_Daemon:1" long-id="iSCSI_Daemon:p_iSCSI_Daemon:1" class="lsb" type="iscsi-scst" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="30000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="25" operation="monitor" operation_key="p_PingD:0_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:0" long-id="PingD:p_PingD:0" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="5000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="28" operation="monitor" operation_key="p_PingD:1_monitor_10000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_PingD:1" long-id="PingD:p_PingD:1" class="ocf" provider="pacemaker" type="ping" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_notify="false" CRM_meta_timeout="5000" crm_feature_set="3.0.6" host_list="192.168.1.1" multiplier="100" />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 3: 10 actions in 10 synapses
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1347283305-37) derived from /var/lib/pengine/pe-input-3.bz2
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 9 fired and confirmed
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: monitor p_iSCSI_Daemon:0_monitor_30000 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: monitor p_iSCSI_Daemon:1_monitor_30000 on Cluster-Server-2 (local)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_iSCSI_Daemon:1_monitor_30000
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[5] on p_iSCSI_Daemon:1 for client 40197, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[1] CRM_meta_clone_max=[2] CRM_meta_interval=[30000] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 42977)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 25: monitor p_PingD:0_monitor_10000 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 28: monitor p_PingD:1_monitor_10000 on Cluster-Server-2 (local)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_PingD:1_monitor_10000
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[6] on p_PingD:1 for client 40197, its parameters: CRM_meta_timeout=[5000] multiplier=[100] CRM_meta_name=[monitor] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] crm_feature_set=[3.0.6] CRM_meta_clone=[1] host_list=[192.168.1.1] CRM_meta_clone_max=[2] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: rsc:p_PingD:1 monitor[6] (pid 42978)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=5, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 5: start p_NFS_Server:0_start_0 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 7: start p_NFS_Server:1_start_0 on Cluster-Server-2 (local)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_NFS_Server:1_start_0
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc p_NFS_Server:1
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation start[7] on p_NFS_Server:1 for client 40197, its parameters: crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone=[1] CRM_meta_globally_unique=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: rsc:p_NFS_Server:1 start[7] (pid 42979)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=1, Pending=6, Fired=2, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 lrmd: [42979]: WARN: For LSB init script, no additional parameters are needed.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: RA output: (p_NFS_Server:1:start:stdout) Exporting directories for NFS kernel daemon...
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: RA output: (p_NFS_Server:1:start:stdout) .
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.28 -> 0.6.29 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_iSCSI_Daemon:0_monitor_30000 (15) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=2, Pending=5, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: RA output: (p_NFS_Server:1:start:stdout) Starting NFS kernel daemon:
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: Managed p_iSCSI_Daemon:1:monitor process 42977 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 42977 exited with return code 0
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: RA output: (p_NFS_Server:1:start:stdout)  nfsd
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_iSCSI_Daemon:1_monitor_30000 (call=5, rc=0, cib-update=78, confirmed=false) ok
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_iSCSI_Daemon:1'
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.29 -> 0.6.30 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_iSCSI_Daemon:1_monitor_30000 (18) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=3, Pending=4, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: RA output: (p_NFS_Server:1:start:stdout)  mountd
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: RA output: (p_NFS_Server:1:start:stdout) .
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: Managed p_NFS_Server:1:start process 42979 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: operation start[7] on p_NFS_Server:1 for client 40197: pid 42979 exited with return code 0
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_NFS_Server:1_start_0 (call=7, rc=0, cib-update=79, confirmed=true) ok
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending start op to history for 'p_NFS_Server:1'
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.30 -> 0.6.31 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_NFS_Server:1_start_0 (7) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 8: monitor p_NFS_Server:1_monitor_30000 on Cluster-Server-2 (local)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_NFS_Server:1_monitor_30000
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[8] on p_NFS_Server:1 for client 40197, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[1] CRM_meta_clone_max=[2] CRM_meta_interval=[30000] CRM_meta_globally_unique=[false]  to the operation list.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: rsc:p_NFS_Server:1 monitor[8] (pid 43022)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=4, Pending=4, Fired=1, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.31 -> 0.6.32 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_NFS_Server:0_start_0 (5) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 6: monitor p_NFS_Server:0_monitor_30000 on Cluster-Server-1
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 10 fired and confirmed
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=5, Pending=4, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=6, Pending=4, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: Managed p_NFS_Server:1:monitor process 43022 exited with return code 0.
Sep 10 15:21:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 43022 exited with return code 0
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_NFS_Server:1_monitor_30000 (call=8, rc=0, cib-update=80, confirmed=false) ok
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_NFS_Server:1'
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.32 -> 0.6.33 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_NFS_Server:1_monitor_30000 (8) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=7, Pending=3, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.33 -> 0.6.34 (S_TRANSITION_ENGINE)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_NFS_Server:0_monitor_30000 (6) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:21:45 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=8, Pending=2, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:47 Cluster-Server-1 attrd_updater: [49509]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:21:47 Cluster-Server-1 attrd_updater: [49509]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:47 Cluster-Server-1 attrd_updater: [49509]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:21:47 Cluster-Server-1 attrd_updater: [49509]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:21:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:21:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: (null)
Sep 10 15:21:47 Cluster-Server-1 lrmd: [48712]: info: Managed p_PingD:0:monitor process 49404 exited with return code 0.
Sep 10 15:21:47 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 49404 exited with return code 0
Sep 10 15:21:47 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:21:47 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_PingD:0_monitor_10000 (call=6, rc=0, cib-update=21, confirmed=false) ok
Sep 10 15:21:47 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_PingD:0'
Sep 10 15:21:47 Cluster-Server-2 attrd_updater: [43059]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:21:47 Cluster-Server-2 attrd_updater: [43059]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:47 Cluster-Server-2 attrd_updater: [43059]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:21:47 Cluster-Server-2 attrd_updater: [43059]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:21:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:21:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: (null)
Sep 10 15:21:47 Cluster-Server-2 attrd: [40195]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:21:47 Cluster-Server-2 lrmd: [40194]: info: Managed p_PingD:1:monitor process 42978 exited with return code 0.
Sep 10 15:21:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 42978 exited with return code 0
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_PingD:1_monitor_10000 (call=6, rc=0, cib-update=81, confirmed=false) ok
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_PingD:1'
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.34 -> 0.6.35 (S_TRANSITION_ENGINE)
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_PingD:0_monitor_10000 (25) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 3 (Complete=9, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): In-progress
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.35 -> 0.6.36 (S_TRANSITION_ENGINE)
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_PingD:1_monitor_10000 (28) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 3 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): Complete
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 3 is now complete
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 3 status: done - <null>
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=100
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:47 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] does not exist
Sep 10 15:21:50 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:50 Cluster-Server-1 attrd: [48713]: notice: attrd_perform_update: Sent update 24: pingd=100
Sep 10 15:21:50 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 24 for pingd=100 passed
Sep 10 15:21:50 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:21:50 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] does not exist
Sep 10 15:21:50 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:21:50 Cluster-Server-2 attrd: [40195]: notice: attrd_perform_update: Sent update 23: pingd=100
Sep 10 15:21:50 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 23 for pingd=100 passed
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.36 -> 0.6.37 (S_IDLE)
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-Cluster-Server-2-pingd, name=pingd, value=100, magic=NA, cib=0.6.37) : Transient attribute: update
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" __crm_diff_marker__="added:top" />
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 82: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.6.37 -> 0.6.38 (S_POLICY_ENGINE)
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-Cluster-Server-1-pingd, name=pingd, value=100, magic=NA, cib=0.6.38) : Transient attribute: update
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" __crm_diff_marker__="added:top" />
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 83: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=83, ref=pe_calc-dc-1347283310-46, seq=312, quorate=1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283310-46" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-4.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="4" />
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 4: 0 actions in 0 synapses
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1347283310-46) derived from /var/lib/pengine/pe-input-4.bz2
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-4.bz2): Complete
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 4 is now complete
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 4 status: done - <null>
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=103
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:21:50 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 4: PEngine Input stored in: /var/lib/pengine/pe-input-4.bz2
Sep 10 15:21:57 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 50535)
Sep 10 15:21:57 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 43862)
Sep 10 15:21:59 Cluster-Server-1 attrd_updater: [50553]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:21:59 Cluster-Server-1 attrd_updater: [50553]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:59 Cluster-Server-1 attrd_updater: [50553]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:21:59 Cluster-Server-1 attrd_updater: [50553]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:21:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:21:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:21:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 50535 exited with return code 0
Sep 10 15:21:59 Cluster-Server-2 attrd_updater: [43915]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:21:59 Cluster-Server-2 attrd_updater: [43915]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:21:59 Cluster-Server-2 attrd_updater: [43915]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:21:59 Cluster-Server-2 attrd_updater: [43915]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:21:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:21:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:21:59 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 43862 exited with return code 0
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [50894] registered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:50894] disconnected.
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:50894] is unregistered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [50896] registered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:50896] disconnected.
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:50896] is unregistered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [50898] registered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:50898] disconnected.
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:50898] is unregistered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [50900] registered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:50900] disconnected.
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:50900] is unregistered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [50909] registered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:50909] disconnected.
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:50909] is unregistered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [50918] registered
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:50918] disconnected.
Sep 10 15:22:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:50918] is unregistered
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 20000us
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 20000 vs 100000 (usec)
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 6 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=32
Sep 10 15:22:06 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:22:06 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-4
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:22:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.6.38 -> 0.7.1 (S_IDLE)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.7.1) : Non-status change
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="6" num_updates="38" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="6" num_updates="38" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:21:43 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <master id="Device_drive" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="Device_drive-meta_attributes" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-notify" name="notify" value="true" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-master-max" name="master-max" value="1" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-master-node-max" name="master-node-max" value="1" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-clone-node-max" name="clone-node-max" value="1" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-clone-max" name="clone-max" value="2" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="Device_drive-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="p_Device_drive" provider="linbit" type="drbd" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="p_Device_drive-instance_attributes" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="p_Device_drive-instance_attributes-drbd_resource" name="drbd_resource" value="drive" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_Device_drive-start-0" interval="0" name="start" timeout="90" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_Device_drive-stop-0" interval="0" name="stop" timeout="100" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_Device_drive-promote-0" interval="0" name="promote" timeout="90" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_Device_drive-demote-0" interval="0" name="demote" timeout="90" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_Device_drive-monitor-10" interval="10" name="monitor" role="Master" timeout="20" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="p_Device_drive-monitor-20" interval="20" name="monitor" role="Slave" timeout="20" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="p_Device_drive-meta_attributes" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="p_Device_drive-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </master>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <primitive class="ocf" id="LVM_drive" provider="nas" type="LVM2" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <instance_attributes id="LVM_drive-instance_attributes" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="LVM_drive-instance_attributes-vg_name" name="vg_name" value="drive-CSD" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="LVM_drive-instance_attributes-activation_mode" name="activation_mode" value="auto" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </instance_attributes>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="LVM_drive-meta_attributes" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="LVM_drive-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </primitive>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_location id="Device_drive_on_Connected_Node" rsc="Device_drive" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <rule id="Device_drive_on_Connected_Node-rule" role="master" score-attribute="pingd" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <expression attribute="pingd" id="Device_drive_on_Connected_Node-expression" operation="defined" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </rule>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </rsc_location>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_location id="Device_drive_prefer_Node" rsc="Device_drive" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <rule id="Device_drive_prefer_Node-rule" role="master" score="50" >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <expression attribute="#uname" id="Device_drive_prefer_Node-expression" operation="eq" value="Cluster-Server-1" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </rule>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </rsc_location>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="Device_drive" first-action="promote" id="LVM_drive_after_Device_drive" score="INFINITY" then="LVM_drive" then-action="start" __crm_diff_marker__="added:top" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="LVM_drive_with_Device_drive" rsc="LVM_drive" score="INFINITY" with-rsc="Device_drive" with-rsc-role="Master" __crm_diff_marker__="added:top" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 84: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.6.38 -> 0.7.1 from Cluster-Server-1
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="6" num_updates="38" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 100000us
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:21:43 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 6
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=107
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <master id="Device_drive" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="Device_drive-meta_attributes" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-notify" name="notify" value="true" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-master-max" name="master-max" value="1" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-master-node-max" name="master-node-max" value="1" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-clone-node-max" name="clone-node-max" value="1" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-clone-max" name="clone-max" value="2" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-interleave" name="interleave" value="true" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="Device_drive-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="p_Device_drive" provider="linbit" type="drbd" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="p_Device_drive-instance_attributes" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="p_Device_drive-instance_attributes-drbd_resource" name="drbd_resource" value="drive" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 100000us
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 6 (current: 6, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_Device_drive-start-0" interval="0" name="start" timeout="90" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_Device_drive-stop-0" interval="0" name="stop" timeout="100" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_Device_drive-promote-0" interval="0" name="promote" timeout="90" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_Device_drive-demote-0" interval="0" name="demote" timeout="90" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_Device_drive-monitor-10" interval="10" name="monitor" role="Master" timeout="20" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="p_Device_drive-monitor-20" interval="20" name="monitor" role="Slave" timeout="20" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="p_Device_drive-meta_attributes" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="p_Device_drive-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </master>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <primitive class="ocf" id="LVM_drive" provider="nas" type="LVM2" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <instance_attributes id="LVM_drive-instance_attributes" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="LVM_drive-instance_attributes-vg_name" name="vg_name" value="drive-CSD" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="LVM_drive-instance_attributes-activation_mode" name="activation_mode" value="auto" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </instance_attributes>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="LVM_drive-meta_attributes" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="LVM_drive-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </primitive>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <constraints >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_location id="Device_drive_on_Connected_Node" rsc="Device_drive" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <rule id="Device_drive_on_Connected_Node-rule" role="master" score-attribute="pingd" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <expression attribute="pingd" id="Device_drive_on_Connected_Node-expression" operation="defined" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </rule>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </rsc_location>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_location id="Device_drive_prefer_Node" rsc="Device_drive" __crm_diff_marker__="added:top" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <rule id="Device_drive_prefer_Node-rule" role="master" score="50" >
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <expression attribute="#uname" id="Device_drive_prefer_Node-expression" operation="eq" value="Cluster-Server-1" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </rule>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </rsc_location>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="Device_drive" first-action="promote" id="LVM_drive_after_Device_drive" score="INFINITY" then="LVM_drive" then-action="start" __crm_diff_marker__="added:top" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="LVM_drive_with_Device_drive" rsc="LVM_drive" score="INFINITY" with-rsc="Device_drive" with-rsc-role="Master" __crm_diff_marker__="added:top" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </constraints>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.7.1): ok (rc=0)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 100000us
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 100000 vs 0  (usec)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 6 (current: 6, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=109
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/85, version=0.7.2): ok (rc=0)
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/87, version=0.7.4): ok (rc=0)
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/88, version=0.7.5): ok (rc=0)
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 25 for probe_complete=true passed
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:22:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 27 for pingd=100 passed
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/90, version=0.7.8): ok (rc=0)
Sep 10 15:22:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-4: Initializing join data (flag=true)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-4: Sending offer to Cluster-Server-1
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-4: Sending offer to Cluster-Server-2
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-4
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:22:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-4
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 50931 exited with return code 0.
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-4
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-4: join_ack_nack
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-4: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 26 for probe_complete=true passed
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 28 for pingd=100 passed
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 30 for probe_complete=true passed
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 32 for pingd=100 passed
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 34 for probe_complete=true passed
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource p_Device_drive:0
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 36 for pingd=100 passed
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=10:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_Device_drive:0_monitor_0
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc p_Device_drive:0
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[9] on p_Device_drive:0 for client 48715, its parameters: drbd_resource=[drive] CRM_meta_timeout=[20000] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] crm_feature_set=[3.0.6] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  to the operation list.
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: rsc:p_Device_drive:0 probe[9] (pid 50936)
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource LVM_drive
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=11:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=LVM_drive_monitor_0
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc LVM_drive
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[10] on LVM_drive for client 48715, its parameters: crm_feature_set=[3.0.6] activation_mode=[auto] vg_name=[drive-CSD] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: rsc:LVM_drive probe[10] (pid 50939)
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 50934 exited with return code 0.
drbd(p_Device_drive:0)[50936]:	2012/09/10_15:22:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: Managed LVM_drive:monitor process 50939 exited with return code 0.
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: operation monitor[10] on LVM_drive for client 48715: pid 50939 exited with return code 0
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for master-p_Device_drive:0
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: (null), Stored: (null)
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: New value of master-p_Device_drive:0 is 10000
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: notice: attrd_perform_update: Sent update 39: master-p_Device_drive:0=10000
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51000]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for master-p_Device_drive:1
Sep 10 15:22:07 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
drbd(p_Device_drive:0)[50936]:	2012/09/10_15:22:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[50936]:	2012/09/10_15:22:07 DEBUG: drive: Command output: 
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 39 for master-p_Device_drive:0=10000 passed
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: WARN: Managed p_Device_drive:0:monitor process 50936 exited with return code 8.
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: operation monitor[9] on p_Device_drive:0 for client 48715: pid 50936 exited with return code 8
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation LVM_drive_monitor_0 (call=10, rc=0, cib-update=25, confirmed=true) ok
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'LVM_drive'
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_Device_drive:0_monitor_0 (call=9, rc=8, cib-update=26, confirmed=true) master
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_Device_drive:0'
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=39:6:8:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_Device_drive:0_monitor_10000
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[11] on p_Device_drive:0 for client 48715, its parameters: drbd_resource=[drive] CRM_meta_role=[Master] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] crm_feature_set=[3.0.6] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  to the operation list.
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: rsc:p_Device_drive:0 monitor[11] (pid 51019)
drbd(p_Device_drive:0)[51019]:	2012/09/10_15:22:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:07 Cluster-Server-1 crm_attribute: [51049]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51019]:	2012/09/10_15:22:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51019]:	2012/09/10_15:22:07 DEBUG: drive: Command output: 
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: WARN: Managed p_Device_drive:0:monitor process 51019 exited with return code 8.
Sep 10 15:22:07 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51019 exited with return code 8
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation p_Device_drive:0_monitor_10000 (call=11, rc=8, cib-update=27, confirmed=false) master
Sep 10 15:22:07 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'p_Device_drive:0'
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/92, version=0.7.9): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 93 : Parsing CIB options
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-4
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-4: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283327-50)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-4
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-4: Still waiting on 1 outstanding offers
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-4: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283327-13)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-4
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-4: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=113
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-4 for 2 clients
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-4: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/95, version=0.7.11): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-4: Still waiting on 2 integrated nodes
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-4 results
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-4: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-4: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-4
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-4: join_ack_nack
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/96, version=0.7.12): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-4: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-4: Updating node state to member for Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-4: Registered callback for LRM update 99
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-4: Updating node state to member for Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-4: Registered callback for LRM update 101
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/97, version=0.7.13): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/98, version=0.7.14): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 99 complete
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-4 complete: join_update_complete_callback
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 44734 exited with return code 0.
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=104)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 105: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/100, version=0.7.16): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.7.15 -> 0.7.16 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='p_NFS_Server:1_last_0'] (p_NFS_Server:1_last_0 on Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_NFS_Server:1_last_0, magic=0:0;7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.7.16) : Resource op removal
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 106: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.16 -> 0.7.17 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Detected LRM refresh - 3 resources updated: Skipping all resource events
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:276 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.7.17) : LRM Refresh
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="7" num_updates="16" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib num_updates="16" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="7" num_updates="17" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:22:06 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <status >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <node_state id="Cluster-Server-2" uname="Cluster-Server-2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_lrm_query" shutdown="0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <lrm id="Cluster-Server-2" __crm_diff_marker__="added:top" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <lrm_resources >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_NFS_Server:1" type="nfs-kernel-server" class="lsb" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_NFS_Server:1_last_0" operation_key="p_NFS_Server:1_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="7" rc-code="0" op-status="0" interval="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_NFS_Server:1_monitor_30000" operation_key="p_NFS_Server:1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="8" rc-code="0" op-status="0" interval="30000" op-digest="4811cef7f7f94e3a35a70be7916cb2fd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_PingD:1" type="ping" class="ocf" provider="pacemaker" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_PingD:1_last_failure_0" operation_key="p_PingD:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="4" rc-code="0" op-status="0" interval="0" op-digest="e746ac7936e48a80d701184bf3591d18" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_PingD:1_monitor_10000" operation_key="p_PingD:1_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="6" rc-code="0" op-status="0" interval="10000" op-digest="4cbd9d437c5ab81b1238d21071f3920b" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_iSCSI_Daemon:1" type="iscsi-scst" class="lsb" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_iSCSI_Daemon:1_last_failure_0" operation_key="p_iSCSI_Daemon:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="3" rc-code="0" op-status="0" interval="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_iSCSI_Daemon:1_monitor_30000" operation_key="p_iSCSI_Daemon:1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="5" rc-code="0" op-status="0" interval="30000" op-digest="4811cef7f7f94e3a35a70be7916cb2fd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </lrm_resources>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </lrm>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </node_state>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </status>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 101 complete
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 107: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/102, version=0.7.18): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.17 -> 0.7.18 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.18 -> 0.7.19 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.19 -> 0.7.20 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/104, version=0.7.20): ok (rc=0)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.20 -> 0.7.21 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=107, ref=pe_calc-dc-1347283327-54, seq=312, quorate=1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Stopped: [ p_Device_drive:0 p_Device_drive:1 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Stopped 
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.21 -> 0.7.22 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.22 -> 0.7.23 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 29 for probe_complete=true passed
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 31 for probe_complete=true passed
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 33 for pingd=100 passed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.23 -> 0.7.24 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 149
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Stopped Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 99
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_Device_drive:0 on Cluster-Server-1 (Stopped)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing LVM_drive on Cluster-Server-1 (Stopped)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing p_Device_drive:1 on Cluster-Server-2 (Stopped)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing LVM_drive on Cluster-Server-2 (Stopped)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_Device_drive:0 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for p_Device_drive:1 on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_Device_drive:0 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 35 for pingd=100 passed
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for p_Device_drive:1 on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.24 -> 0.7.25 (S_POLICY_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_Device_drive:0	(Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: LogActions: Promote p_Device_drive:0	(Stopped -> Master Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   p_Device_drive:1	(Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   LVM_drive	(Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283327-54" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-5.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="5" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="77" operation="notify" operation_key="p_Device_drive:0_post_notify_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_operation="promote" CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_notify_type="post" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="60" operation="notify" operation_key="Device_drive_post_notify_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="76" operation="notify" operation_key="p_Device_drive:0_pre_notify_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_operation="promote" CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="58" operation="notify" operation_key="Device_drive_pre_notify_promote_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="74" operation="notify" operation_key="p_Device_drive:0_post_notify_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_operation="start" CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_notify_type="post" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="48" operation="notify" operation_key="Device_drive_post_notify_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="41" operation="monitor" operation_key="p_Device_drive:0_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="10000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_op_target_rc="8" CRM_meta_role="Master" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="39" operation="start" operation_key="p_Device_drive:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="40" operation="promote" operation_key="p_Device_drive:0_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="49" operation="notified" operation_key="Device_drive_confirmed-post_notify_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="61" operation="notified" operation_key="Device_drive_confirmed-post_notify_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="40" operation="promote" operation_key="p_Device_drive:0_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="promote" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_timeout="90000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="39" operation="start" operation_key="p_Device_drive:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="56" operation="promote" operation_key="Device_drive_promote_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="39" operation="start" operation_key="p_Device_drive:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="start" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_timeout="90000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="8" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="44" operation="start" operation_key="Device_drive_start_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="10" operation="monitor" operation_key="p_Device_drive:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="79" operation="notify" operation_key="p_Device_drive:1_post_notify_promote_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_operation="promote" CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_notify_type="post" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="60" operation="notify" operation_key="Device_drive_post_notify_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="78" operation="notify" operation_key="p_Device_drive:1_pre_notify_promote_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_operation="promote" CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="58" operation="notify" operation_key="Device_drive_pre_notify_promote_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="75" operation="notify" operation_key="p_Device_drive:1_post_notify_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_operation="start" CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_notify_type="post" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="48" operation="notify" operation_key="Device_drive_post_notify_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="43" operation="monitor" operation_key="p_Device_drive:1_monitor_20000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="20000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="42" operation="start" operation_key="p_Device_drive:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="49" operation="notified" operation_key="Device_drive_confirmed-post_notify_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="61" operation="notified" operation_key="Device_drive_confirmed-post_notify_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="42" operation="start" operation_key="p_Device_drive:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="start" CRM_meta_notify="true" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource=" " CRM_meta_notify_demote_uname=" " CRM_meta_notify_inactive_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_master_resource=" " CRM_meta_notify_master_uname=" " CRM_meta_notify_promote_resource="p_Device_drive:0 " CRM_meta_notify_promote_uname="Cluster-Server-1 " CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " CRM_meta_notify_start_resource="p_Device_drive:0 p_Device_drive:1 " CRM_meta_notify_start_uname="Cluster-Server-1 Cluster-Server-2 " CRM_meta_notify_stop_resource=" " CRM_meta_notify_stop_uname=" " CRM_meta_timeout="90000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 5: PEngine Input stored in: /var/lib/pengine/pe-input-5.bz2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="8" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="44" operation="start" operation_key="Device_drive_start_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="13" operation="monitor" operation_key="p_Device_drive:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="13" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="61" operation="notified" operation_key="Device_drive_confirmed-post_notify_promoted_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="promote" CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="60" operation="notify" operation_key="Device_drive_post_notify_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="77" operation="notify" operation_key="p_Device_drive:0_post_notify_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="79" operation="notify" operation_key="p_Device_drive:1_post_notify_promote_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="14" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="60" operation="notify" operation_key="Device_drive_post_notify_promoted_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="promote" CRM_meta_notify_type="post" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="57" operation="promoted" operation_key="Device_drive_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="59" operation="notified" operation_key="Device_drive_confirmed-pre_notify_promote_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="15" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="59" operation="notified" operation_key="Device_drive_confirmed-pre_notify_promote_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="promote" CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="58" operation="notify" operation_key="Device_drive_pre_notify_promote_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="76" operation="notify" operation_key="p_Device_drive:0_pre_notify_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="78" operation="notify" operation_key="p_Device_drive:1_pre_notify_promote_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="16" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="58" operation="notify" operation_key="Device_drive_pre_notify_promote_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="promote" CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="49" operation="notified" operation_key="Device_drive_confirmed-post_notify_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="17" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="57" operation="promoted" operation_key="Device_drive_promoted_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="40" operation="promote" operation_key="p_Device_drive:0_promote_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="18" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="56" operation="promote" operation_key="Device_drive_promote_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="45" operation="running" operation_key="Device_drive_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="59" operation="notified" operation_key="Device_drive_confirmed-pre_notify_promote_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="19" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="49" operation="notified" operation_key="Device_drive_confirmed-post_notify_running_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="start" CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="48" operation="notify" operation_key="Device_drive_post_notify_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="74" operation="notify" operation_key="p_Device_drive:0_post_notify_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="75" operation="notify" operation_key="p_Device_drive:1_post_notify_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="20" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="48" operation="notify" operation_key="Device_drive_post_notify_running_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="start" CRM_meta_notify_type="post" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="45" operation="running" operation_key="Device_drive_running_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="47" operation="notified" operation_key="Device_drive_confirmed-pre_notify_start_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="21" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="47" operation="notified" operation_key="Device_drive_confirmed-pre_notify_start_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="start" CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="46" operation="notify" operation_key="Device_drive_pre_notify_start_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="22" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="46" operation="notify" operation_key="Device_drive_pre_notify_start_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_notify_operation="start" CRM_meta_notify_type="pre" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="23" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="45" operation="running" operation_key="Device_drive_running_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="39" operation="start" operation_key="p_Device_drive:0_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="42" operation="start" operation_key="p_Device_drive:1_start_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="44" operation="start" operation_key="Device_drive_start_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="24" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="44" operation="start" operation_key="Device_drive_start_0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_notify="true" CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="47" operation="notified" operation_key="Device_drive_confirmed-pre_notify_start_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="25" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="68" operation="start" operation_key="LVM_drive_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="LVM_drive" long-id="LVM_drive" class="ocf" provider="nas" type="LVM2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" activation_mode="auto" crm_feature_set="3.0.6" vg_name="drive-CSD" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="8" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="61" operation="notified" operation_key="Device_drive_confirmed-post_notify_promoted_0" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="26" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="14" operation="monitor" operation_key="LVM_drive_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="LVM_drive" long-id="LVM_drive" class="ocf" provider="nas" type="LVM2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" activation_mode="auto" crm_feature_set="3.0.6" vg_name="drive-CSD" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="27" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="11" operation="monitor" operation_key="LVM_drive_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="LVM_drive" long-id="LVM_drive" class="ocf" provider="nas" type="LVM2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" activation_mode="auto" crm_feature_set="3.0.6" vg_name="drive-CSD" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="28" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="12" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="monitor" operation_key="p_Device_drive:1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="14" operation="monitor" operation_key="LVM_drive_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="29" priority="1000000" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="9" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="10" operation="monitor" operation_key="p_Device_drive:0_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="11" operation="monitor" operation_key="LVM_drive_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="30" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="8" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="9" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="12" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 5: 31 actions in 31 synapses
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1347283327-54) derived from /var/lib/pengine/pe-input-5.bz2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.25 -> 0.7.26 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.26 -> 0.7.27 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.27 -> 0.7.28 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 10: monitor p_Device_drive:0_monitor_0 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 13: monitor p_Device_drive:1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource p_Device_drive:1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=13:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_Device_drive:1_monitor_0
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc p_Device_drive:1
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[9] on p_Device_drive:1 for client 40197, its parameters: drbd_resource=[drive] CRM_meta_timeout=[20000] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] crm_feature_set=[3.0.6] CRM_meta_clone=[1] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  to the operation list.
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: rsc:p_Device_drive:1 probe[9] (pid 44735)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 46 fired and confirmed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 14: monitor LVM_drive_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource LVM_drive
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=14:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=LVM_drive_monitor_0
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc LVM_drive
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[10] on LVM_drive for client 40197, its parameters: crm_feature_set=[3.0.6] activation_mode=[auto] vg_name=[drive-CSD] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: rsc:LVM_drive probe[10] (pid 44736)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 11: monitor LVM_drive_monitor_0 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=26, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 47 fired and confirmed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 44 fired and confirmed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=1, Pending=4, Fired=2, Skipped=0, Incomplete=24, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=3, Pending=4, Fired=0, Skipped=0, Incomplete=24, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
drbd(p_Device_drive:1)[44735]:	2012/09/10_15:22:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: WARN: Managed LVM_drive:monitor process 44736 exited with return code 7.
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: operation monitor[10] on LVM_drive for client 40197: pid 44736 exited with return code 7
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for master-p_Device_drive:1
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: (null), Stored: (null)
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: New value of master-p_Device_drive:1 is 10000
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: notice: attrd_perform_update: Sent update 38: master-p_Device_drive:1=10000
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 38 for master-p_Device_drive:1=10000 passed
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44800]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for master-p_Device_drive:0
Sep 10 15:22:07 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation LVM_drive_monitor_0 (call=10, rc=7, cib-update=108, confirmed=true) not running
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'LVM_drive'
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.28 -> 0.7.29 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-Cluster-Server-2-master-p_Device_drive.1, name=master-p_Device_drive:1, value=10000, magic=NA, cib=0.7.29) : Transient attribute: update
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" __crm_diff_marker__="added:top" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort priority upgraded from 0 to 1000000
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort action done superceeded by restart
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.29 -> 0.7.30 (S_TRANSITION_ENGINE)
drbd(p_Device_drive:1)[44735]:	2012/09/10_15:22:07 DEBUG: drive: Exit code 0
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-Cluster-Server-1-master-p_Device_drive.0, name=master-p_Device_drive:0, value=10000, magic=NA, cib=0.7.30) : Transient attribute: update
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" __crm_diff_marker__="added:top" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.30 -> 0.7.31 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action LVM_drive_monitor_0 (14) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=4, Pending=3, Fired=0, Skipped=12, Incomplete=12, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
drbd(p_Device_drive:1)[44735]:	2012/09/10_15:22:07 DEBUG: drive: Command output: 
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: Managed p_Device_drive:1:monitor process 44735 exited with return code 0.
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: operation monitor[9] on p_Device_drive:1 for client 40197: pid 44735 exited with return code 0
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_Device_drive:1_monitor_0 (call=9, rc=0, cib-update=109, confirmed=true) ok
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_Device_drive:1'
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.31 -> 0.7.32 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 11 (LVM_drive_monitor_0) on Cluster-Server-1 failed (target: 7 vs. rc: 0): Error
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=LVM_drive_last_failure_0, magic=0:0;11:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.7.32) : Event failed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action LVM_drive_monitor_0 (11) confirmed on Cluster-Server-1 (rc=4)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=5, Pending=2, Fired=0, Skipped=12, Incomplete=12, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.32 -> 0.7.33 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 13 (p_Device_drive:1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 0): Error
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_Device_drive:1_last_failure_0, magic=0:0;13:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.7.33) : Event failed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_Device_drive:1_monitor_0 (13) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 12: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=6, Pending=1, Fired=1, Skipped=12, Incomplete=11, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=7, Pending=1, Fired=0, Skipped=12, Incomplete=11, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.33 -> 0.7.34 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 10 (p_Device_drive:0_monitor_0) on Cluster-Server-1 failed (target: 7 vs. rc: 8): Error
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_Device_drive:0_last_failure_0, magic=0:8;10:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.7.34) : Event failed
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_Device_drive:0_monitor_0 (10) confirmed on Cluster-Server-1 (rc=4)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 5 (Complete=8, Pending=0, Fired=1, Skipped=12, Incomplete=10, Source=/var/lib/pengine/pe-input-5.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 5 (Complete=9, Pending=0, Fired=0, Skipped=12, Incomplete=10, Source=/var/lib/pengine/pe-input-5.bz2): Stopped
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 5 is now complete
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 5 status: restart - Transient attribute: update
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 110: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=110, ref=pe_calc-dc-1347283327-61, seq=312, quorate=1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_Device_drive:0 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for p_Device_drive:1 on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for p_Device_drive:0 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for p_Device_drive:1 on Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283327-61" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-6.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="6" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="39" operation="monitor" operation_key="p_Device_drive:0_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:0" long-id="Device_drive:p_Device_drive:0" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="10000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_op_target_rc="8" CRM_meta_role="Master" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="43" operation="monitor" operation_key="p_Device_drive:1_monitor_20000" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="p_Device_drive:1" long-id="Device_drive:p_Device_drive:1" class="ocf" provider="linbit" type="drbd" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="20000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="drive" />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 6: 2 actions in 2 synapses
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1347283327-61) derived from /var/lib/pengine/pe-input-6.bz2
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 39: monitor p_Device_drive:0_monitor_10000 on Cluster-Server-1
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 43: monitor p_Device_drive:1_monitor_20000 on Cluster-Server-2 (local)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=43:6:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=p_Device_drive:1_monitor_20000
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[11] on p_Device_drive:1 for client 40197, its parameters: drbd_resource=[drive] CRM_meta_role=[Slave] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] crm_feature_set=[3.0.6] CRM_meta_clone=[1] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_interval=[20000] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  to the operation list.
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: rsc:p_Device_drive:1 monitor[11] (pid 44818)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 6 (Complete=0, Pending=2, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 6: PEngine Input stored in: /var/lib/pengine/pe-input-6.bz2
drbd(p_Device_drive:1)[44818]:	2012/09/10_15:22:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:07 Cluster-Server-2 crm_attribute: [44848]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:22:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[44818]:	2012/09/10_15:22:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[44818]:	2012/09/10_15:22:07 DEBUG: drive: Command output: 
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: Managed p_Device_drive:1:monitor process 44818 exited with return code 0.
Sep 10 15:22:07 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 44818 exited with return code 0
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation p_Device_drive:1_monitor_20000 (call=11, rc=0, cib-update=111, confirmed=false) ok
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'p_Device_drive:1'
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.34 -> 0.7.35 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_Device_drive:1_monitor_20000 (43) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 6 (Complete=1, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-6.bz2): In-progress
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.7.35 -> 0.7.36 (S_TRANSITION_ENGINE)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action p_Device_drive:0_monitor_10000 (39) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 6 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-6.bz2): Complete
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 6 is now complete
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 6 status: done - <null>
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=135
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:22:07 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:22:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51081)
Sep 10 15:22:09 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 44918)
Sep 10 15:22:11 Cluster-Server-1 attrd_updater: [51104]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:11 Cluster-Server-1 attrd_updater: [51104]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:11 Cluster-Server-1 attrd_updater: [51104]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:11 Cluster-Server-1 attrd_updater: [51104]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:11 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:11 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:11 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51081 exited with return code 0
Sep 10 15:22:11 Cluster-Server-2 attrd_updater: [45289]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:11 Cluster-Server-2 attrd_updater: [45289]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:11 Cluster-Server-2 attrd_updater: [45289]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:11 Cluster-Server-2 attrd_updater: [45289]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:11 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:11 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:11 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 44918 exited with return code 0
Sep 10 15:22:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 51105)
Sep 10 15:22:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 51105 exited with return code 0
Sep 10 15:22:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 51109)
Sep 10 15:22:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running
Sep 10 15:22:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 51109 exited with return code 0
Sep 10 15:22:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 45570)
Sep 10 15:22:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 45570 exited with return code 0
Sep 10 15:22:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 45574)
Sep 10 15:22:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running
Sep 10 15:22:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 45574 exited with return code 0
Sep 10 15:22:17 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51111)
drbd(p_Device_drive:0)[51111]:	2012/09/10_15:22:17 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:17 Cluster-Server-1 crm_attribute: [51141]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:17 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:17 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51111]:	2012/09/10_15:22:17 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51111]:	2012/09/10_15:22:17 DEBUG: drive: Command output: 
Sep 10 15:22:17 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 
Sep 10 15:22:17 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51111 exited with return code 8
Sep 10 15:22:21 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51148)
Sep 10 15:22:21 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 45996)
Sep 10 15:22:23 Cluster-Server-1 attrd_updater: [51166]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:23 Cluster-Server-1 attrd_updater: [51166]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:23 Cluster-Server-1 attrd_updater: [51166]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:23 Cluster-Server-1 attrd_updater: [51166]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:23 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:23 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:23 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51148 exited with return code 0
Sep 10 15:22:23 Cluster-Server-2 attrd_updater: [46047]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:23 Cluster-Server-2 attrd_updater: [46047]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:23 Cluster-Server-2 attrd_updater: [46047]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:23 Cluster-Server-2 attrd_updater: [46047]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:23 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:23 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:23 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 45996 exited with return code 0
Sep 10 15:22:27 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51167)
drbd(p_Device_drive:0)[51167]:	2012/09/10_15:22:27 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:27 Cluster-Server-1 crm_attribute: [51197]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:27 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:27 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51167]:	2012/09/10_15:22:27 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51167]:	2012/09/10_15:22:27 DEBUG: drive: Command output: 
Sep 10 15:22:27 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:22:27 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51167 exited with return code 8
Sep 10 15:22:27 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 46370)
drbd(p_Device_drive:1)[46370]:	2012/09/10_15:22:27 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:27 Cluster-Server-2 crm_attribute: [46400]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:22:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[46370]:	2012/09/10_15:22:27 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[46370]:	2012/09/10_15:22:27 DEBUG: drive: Command output: 
Sep 10 15:22:27 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:22:27 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 46370 exited with return code 0
Sep 10 15:22:33 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51204)
Sep 10 15:22:33 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 46863)
Sep 10 15:22:35 Cluster-Server-1 attrd_updater: [51222]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:35 Cluster-Server-1 attrd_updater: [51222]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:35 Cluster-Server-1 attrd_updater: [51222]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:35 Cluster-Server-1 attrd_updater: [51222]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:35 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:35 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51204 exited with return code 0
Sep 10 15:22:35 Cluster-Server-2 attrd_updater: [47138]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:35 Cluster-Server-2 attrd_updater: [47138]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:35 Cluster-Server-2 attrd_updater: [47138]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:35 Cluster-Server-2 attrd_updater: [47138]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:35 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:35 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:35 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 46863 exited with return code 0
Sep 10 15:22:37 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51223)
drbd(p_Device_drive:0)[51223]:	2012/09/10_15:22:37 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:37 Cluster-Server-1 crm_attribute: [51253]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:37 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:37 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51223]:	2012/09/10_15:22:37 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51223]:	2012/09/10_15:22:37 DEBUG: drive: Command output: 
Sep 10 15:22:37 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:22:37 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51223 exited with return code 8
Sep 10 15:22:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 51332)
Sep 10 15:22:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 51332 exited with return code 0
Sep 10 15:22:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 51336)
Sep 10 15:22:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:22:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 51336 exited with return code 0
Sep 10 15:22:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51338)
Sep 10 15:22:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 47940)
Sep 10 15:22:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 47940 exited with return code 0
Sep 10 15:22:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 47944)
Sep 10 15:22:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:22:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 47944 exited with return code 0
Sep 10 15:22:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 47946)
Sep 10 15:22:47 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51357)
drbd(p_Device_drive:0)[51357]:	2012/09/10_15:22:47 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:47 Cluster-Server-1 crm_attribute: [51387]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51357]:	2012/09/10_15:22:47 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51357]:	2012/09/10_15:22:47 DEBUG: drive: Command output: 
Sep 10 15:22:47 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:22:47 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51357 exited with return code 8
Sep 10 15:22:47 Cluster-Server-1 attrd_updater: [51396]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:47 Cluster-Server-1 attrd_updater: [51396]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:47 Cluster-Server-1 attrd_updater: [51396]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:47 Cluster-Server-1 attrd_updater: [51396]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:47 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51338 exited with return code 0
Sep 10 15:22:47 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 48101)
drbd(p_Device_drive:1)[48101]:	2012/09/10_15:22:47 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:47 Cluster-Server-2 crm_attribute: [48131]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:22:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[48101]:	2012/09/10_15:22:47 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[48101]:	2012/09/10_15:22:47 DEBUG: drive: Command output: 
Sep 10 15:22:47 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:22:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 48101 exited with return code 0
Sep 10 15:22:47 Cluster-Server-2 attrd_updater: [48140]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:47 Cluster-Server-2 attrd_updater: [48140]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:47 Cluster-Server-2 attrd_updater: [48140]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:47 Cluster-Server-2 attrd_updater: [48140]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 47946 exited with return code 0
Sep 10 15:22:57 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51457)
Sep 10 15:22:57 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51460)
drbd(p_Device_drive:0)[51457]:	2012/09/10_15:22:57 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:22:57 Cluster-Server-1 crm_attribute: [51503]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:22:57 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:22:57 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51457]:	2012/09/10_15:22:57 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51457]:	2012/09/10_15:22:57 DEBUG: drive: Command output: 
Sep 10 15:22:57 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:22:57 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51457 exited with return code 8
Sep 10 15:22:57 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 48845)
Sep 10 15:22:59 Cluster-Server-1 attrd_updater: [51512]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:59 Cluster-Server-1 attrd_updater: [51512]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:59 Cluster-Server-1 attrd_updater: [51512]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:59 Cluster-Server-1 attrd_updater: [51512]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51460 exited with return code 0
Sep 10 15:22:59 Cluster-Server-2 attrd_updater: [48900]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:22:59 Cluster-Server-2 attrd_updater: [48900]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:22:59 Cluster-Server-2 attrd_updater: [48900]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:22:59 Cluster-Server-2 attrd_updater: [48900]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:22:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:22:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:22:59 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 48845 exited with return code 0
Sep 10 15:23:07 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51513)
drbd(p_Device_drive:0)[51513]:	2012/09/10_15:23:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:07 Cluster-Server-1 crm_attribute: [51543]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:23:07 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51513]:	2012/09/10_15:23:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51513]:	2012/09/10_15:23:07 DEBUG: drive: Command output: 
Sep 10 15:23:07 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:23:07 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51513 exited with return code 8
Sep 10 15:23:07 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 49623)
drbd(p_Device_drive:1)[49623]:	2012/09/10_15:23:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:07 Cluster-Server-2 crm_attribute: [49653]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:23:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[49623]:	2012/09/10_15:23:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[49623]:	2012/09/10_15:23:07 DEBUG: drive: Command output: 
Sep 10 15:23:07 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:23:07 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 49623 exited with return code 0
Sep 10 15:23:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51551)
Sep 10 15:23:09 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 49693)
Sep 10 15:23:11 Cluster-Server-1 attrd_updater: [51570]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:11 Cluster-Server-1 attrd_updater: [51570]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:11 Cluster-Server-1 attrd_updater: [51570]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:11 Cluster-Server-1 attrd_updater: [51570]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:11 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:11 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:11 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51551 exited with return code 0
Sep 10 15:23:11 Cluster-Server-2 attrd_updater: [49958]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:11 Cluster-Server-2 attrd_updater: [49958]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:11 Cluster-Server-2 attrd_updater: [49958]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:11 Cluster-Server-2 attrd_updater: [49958]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:11 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:11 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:11 Cluster-Server-2 attrd: [40195]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:23:11 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 49693 exited with return code 0
Sep 10 15:23:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 51573)
Sep 10 15:23:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 51576)
Sep 10 15:23:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 51573 exited with return code 0
Sep 10 15:23:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:23:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 51576 exited with return code 0
Sep 10 15:23:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 50345)
Sep 10 15:23:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 50346)
Sep 10 15:23:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 50345 exited with return code 0
Sep 10 15:23:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 50346 exited with return code 0
Sep 10 15:23:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:23:17 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51579)
drbd(p_Device_drive:0)[51579]:	2012/09/10_15:23:17 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:17 Cluster-Server-1 crm_attribute: [51609]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:17 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:23:17 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51579]:	2012/09/10_15:23:17 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51579]:	2012/09/10_15:23:17 DEBUG: drive: Command output: 
Sep 10 15:23:17 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:23:17 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51579 exited with return code 8
Sep 10 15:23:21 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51618)
Sep 10 15:23:21 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 50664)
Sep 10 15:23:23 Cluster-Server-1 attrd_updater: [51636]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:23 Cluster-Server-1 attrd_updater: [51636]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:23 Cluster-Server-1 attrd_updater: [51636]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:23 Cluster-Server-1 attrd_updater: [51636]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:23 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:23 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:23 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51618 exited with return code 0
Sep 10 15:23:23 Cluster-Server-2 attrd_updater: [50863]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:23 Cluster-Server-2 attrd_updater: [50863]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:23 Cluster-Server-2 attrd_updater: [50863]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:23 Cluster-Server-2 attrd_updater: [50863]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:23 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:23 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:23 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 50664 exited with return code 0
Sep 10 15:23:27 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51637)
drbd(p_Device_drive:0)[51637]:	2012/09/10_15:23:27 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:27 Cluster-Server-1 crm_attribute: [51667]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:27 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:23:27 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51637]:	2012/09/10_15:23:27 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51637]:	2012/09/10_15:23:27 DEBUG: drive: Command output: 
Sep 10 15:23:27 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:23:27 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51637 exited with return code 8
Sep 10 15:23:27 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 51145)
drbd(p_Device_drive:1)[51145]:	2012/09/10_15:23:27 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:27 Cluster-Server-2 crm_attribute: [51175]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:23:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[51145]:	2012/09/10_15:23:27 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[51145]:	2012/09/10_15:23:27 DEBUG: drive: Command output: 
Sep 10 15:23:27 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:23:27 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 51145 exited with return code 0
Sep 10 15:23:33 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 51939)
Sep 10 15:23:33 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 51629)
Sep 10 15:23:35 Cluster-Server-1 attrd_updater: [51957]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:35 Cluster-Server-1 attrd_updater: [51957]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:35 Cluster-Server-1 attrd_updater: [51957]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:35 Cluster-Server-1 attrd_updater: [51957]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:35 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:35 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 51939 exited with return code 0
Sep 10 15:23:35 Cluster-Server-2 attrd_updater: [51894]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:35 Cluster-Server-2 attrd_updater: [51894]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:35 Cluster-Server-2 attrd_updater: [51894]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:35 Cluster-Server-2 attrd_updater: [51894]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:35 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:35 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:35 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 51629 exited with return code 0
Sep 10 15:23:37 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 51958)
drbd(p_Device_drive:0)[51958]:	2012/09/10_15:23:37 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:37 Cluster-Server-1 crm_attribute: [51988]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:37 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:23:37 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[51958]:	2012/09/10_15:23:37 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[51958]:	2012/09/10_15:23:37 DEBUG: drive: Command output: 
Sep 10 15:23:37 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:23:37 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 51958 exited with return code 8
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52235] registered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52235] disconnected.
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52235] is unregistered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52237] registered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52237] disconnected.
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52237] is unregistered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52239] registered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52239] disconnected.
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52239] is unregistered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52241] registered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52241] disconnected.
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52241] is unregistered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52250] registered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52250] disconnected.
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52250] is unregistered
Sep 10 15:23:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52259] registered
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52259] disconnected.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52259] is unregistered
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52266] registered
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52266] disconnected.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52266] is unregistered
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [52273] registered
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:52273] disconnected.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:52273] is unregistered
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 40000us
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 40000 vs 170000 (usec)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 7 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=37
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-5
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 52285 exited with return code 0.
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-5
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 43 for probe_complete=true passed
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-5
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-5: join_ack_nack
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-5: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 45 for pingd=100 passed
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Target_iscsi1
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=12:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_monitor_0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi1
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[12] on Target_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi1 probe[12] (pid 52289)
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Lun_iscsi1
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=13:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_monitor_0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi1
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[13] on Lun_iscsi1 for client 48715, its parameters: path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi1] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1]  to the operation list.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi1 probe[13] (pid 52290)
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 47 for master-p_Device_drive:0=10000 passed
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 49 for probe_complete=true passed
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 52 for probe_complete=true passed
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 54 for pingd=100 passed
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 56 for pingd=100 passed
SCSTTarget(Target_iscsi1)[52289]:	2012/09/10_15:23:43 DEBUG: Target_iscsi1 monitor : 7
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: WARN: Managed Target_iscsi1:monitor process 52289 exited with return code 7.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: operation monitor[12] on Target_iscsi1 for client 48715: pid 52289 exited with return code 7
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
SCSTLun(Lun_iscsi1)[52290]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 monitor : 7
Sep 10 15:23:43 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 52288 exited with return code 0.
SCSTLun(Lun_iscsi1)[52290]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 monitor : 7
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: WARN: Managed Lun_iscsi1:monitor process 52290 exited with return code 7.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: operation monitor[13] on Lun_iscsi1 for client 48715: pid 52290 exited with return code 7
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi1_monitor_0 (call=12, rc=7, cib-update=31, confirmed=true) not running
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi1'
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi1_monitor_0 (call=13, rc=7, cib-update=32, confirmed=true) not running
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi1'
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:23:43 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=74:7:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_start_0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi1
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[14] on Target_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[start] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[240000]  to the operation list.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi1 start[14] (pid 52315)
SCSTTarget(Target_iscsi1)[52315]:	2012/09/10_15:23:43 INFO: target iqn.2005-07.com.example:vdisk.iscsi1: Starting...
SCSTTarget(Target_iscsi1)[52315]:	2012/09/10_15:23:43 INFO: target iqn.2005-07.com.example:vdisk.iscsi1: Starting...
SCSTTarget(Target_iscsi1)[52315]:	2012/09/10_15:23:43 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTTarget(Target_iscsi1)[52315]:	2012/09/10_15:23:43 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTTarget(Target_iscsi1)[52315]:	2012/09/10_15:23:43 DEBUG: SCST target iqn.2005-07.com.example:vdisk.iscsi1: Started.
SCSTTarget(Target_iscsi1)[52315]:	2012/09/10_15:23:43 DEBUG: Target_iscsi1 start : 0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi1:start process 52315 exited with return code 0.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: operation start[14] on Target_iscsi1 for client 48715: pid 52315 exited with return code 0
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi1 after complete start op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi1_start_0 (call=14, rc=0, cib-update=33, confirmed=true) ok
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'Target_iscsi1'
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=75:7:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_monitor_10000
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[15] on Target_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[monitor] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[60000] CRM_meta_interval=[10000]  to the operation list.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi1 monitor[15] (pid 52335)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=76:7:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_start_0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi1
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[16] on Lun_iscsi1 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[60000] CRM_meta_name=[start] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1] path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] lun=[0] device_name=[iscsi1]  to the operation list.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi1 start[16] (pid 52336)
SCSTTarget(Target_iscsi1)[52335]:	2012/09/10_15:23:43 DEBUG: Target_iscsi1 monitor : 0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi1:monitor process 52335 exited with return code 0.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: operation monitor[15] on Target_iscsi1 for client 48715: pid 52335 exited with return code 0
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Disabling target iqn.2005-07.com.example:vdisk.iscsi1
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi1 after complete monitor op (interval=10000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi1_monitor_10000 (call=15, rc=0, cib-update=34, confirmed=false) ok
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi1'
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Disabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Starting lun 0 on target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Starting lun 0 on target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Opening device iscsi1, target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Opening device iscsi1, target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Adding LUN 0, device iscsi1, target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Adding LUN 0, device iscsi1, target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Started lun 0 on target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Started lun 0 on target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 start : 0
SCSTLun(Lun_iscsi1)[52336]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 start : 0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi1:start process 52336 exited with return code 0.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: operation start[16] on Lun_iscsi1 for client 48715: pid 52336 exited with return code 0
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi1 after complete start op (interval=0)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi1_start_0 (call=16, rc=0, cib-update=35, confirmed=true) ok
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'Lun_iscsi1'
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=77:7:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_monitor_10000
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[17] on Lun_iscsi1 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1] path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] lun=[0] device_name=[iscsi1]  to the operation list.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi1 monitor[17] (pid 52383)
SCSTLun(Lun_iscsi1)[52383]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 monitor : 0
SCSTLun(Lun_iscsi1)[52383]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 monitor : 0
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi1:monitor process 52383 exited with return code 0.
Sep 10 15:23:43 Cluster-Server-1 lrmd: [48712]: info: operation monitor[17] on Lun_iscsi1 for client 48715: pid 52383 exited with return code 0
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi1 after complete monitor op (interval=10000)
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi1_monitor_10000 (call=17, rc=0, cib-update=36, confirmed=false) ok
Sep 10 15:23:43 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi1'
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.7.36 -> 0.8.1 (S_IDLE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.8.1) : Non-status change
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="7" num_updates="36" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="7" num_updates="36" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:22:06 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi1" __crm_diff_marker__="added:top" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Target_iscsi1" provider="nas" type="SCSTTarget" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Target_iscsi1-instance_attributes" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi1-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi1-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi1-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi1-meta_attributes" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Lun_iscsi1" provider="nas" type="SCSTLun" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Lun_iscsi1-instance_attributes" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi1_iSCSI" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-device_name" name="device_name" value="iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi1-monitor-10" interval="10" name="monitor" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi1-meta_attributes" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="iSCSI_iscsi1_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi1_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi1_with_LVM_drive" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi1_with_iSCSI_Daemon" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 112: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.7.36 -> 0.8.1 from Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="7" num_updates="36" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:22:06 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="iSCSI_iscsi1" __crm_diff_marker__="added:top" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="Target_iscsi1" provider="nas" type="SCSTTarget" >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="Target_iscsi1-instance_attributes" >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Target_iscsi1-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi1-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi1-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="Target_iscsi1-meta_attributes" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Target_iscsi1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="Lun_iscsi1" provider="nas" type="SCSTLun" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="Lun_iscsi1-instance_attributes" >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi1-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 170000us
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi1-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi1-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi1_iSCSI" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi1-instance_attributes-device_name" name="device_name" value="iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi1-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 7
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi1-monitor-10" interval="10" name="monitor" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=139
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="Lun_iscsi1-meta_attributes" >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <constraints >
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="LVM_drive" id="iSCSI_iscsi1_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi1_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="iSCSI_iscsi1_with_LVM_drive" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="iSCSI_iscsi1_with_iSCSI_Daemon" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="added:top" />
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </constraints>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.8.1): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 170000us
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 7 (current: 7, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 170000us
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 170000 vs 0  (usec)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 7 (current: 7, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=141
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/113, version=0.8.2): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/115, version=0.8.4): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/116, version=0.8.5): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 41 for master-p_Device_drive:1=10000 passed
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 43 for probe_complete=true passed
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 45 for pingd=100 passed
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/118, version=0.8.9): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-5: Initializing join data (flag=true)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-5: Sending offer to Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-5: Sending offer to Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-5
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/120, version=0.8.10): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 121 : Parsing CIB options
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-5
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-5: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283423-67)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-5
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-5: Still waiting on 1 outstanding offers
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 52483 exited with return code 0.
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-5: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283423-16)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-5
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-5: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=145
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finalizing join-5 for 2 clients
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-5: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/123, version=0.8.12): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-5: Still waiting on 2 integrated nodes
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-5 results
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-5: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-5: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/124, version=0.8.13): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-5
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-5: join_ack_nack
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource LVM_drive after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/125, version=0.8.14): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-5: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-5: Updating node state to member for Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-5: Registered callback for LRM update 127
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-5: Updating node state to member for Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-5: Registered callback for LRM update 129
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/126, version=0.8.15): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 127 complete
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-5 complete: join_update_complete_callback
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=132)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 133: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.8.16 -> 0.8.17 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='p_NFS_Server:1_last_0'] (p_NFS_Server:1_last_0 on Cluster-Server-2)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_NFS_Server:1_last_0, magic=0:0;7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.8.17) : Resource op removal
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 134: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/128, version=0.8.17): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.17 -> 0.8.18 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Detected LRM refresh - 5 resources updated: Skipping all resource events
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:276 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.8.18) : LRM Refresh
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="8" num_updates="17" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib num_updates="17" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="8" num_updates="18" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:23:43 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <status >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <node_state id="Cluster-Server-2" uname="Cluster-Server-2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_lrm_query" shutdown="0" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <lrm id="Cluster-Server-2" __crm_diff_marker__="added:top" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <lrm_resources >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_Device_drive:1" type="drbd" class="ocf" provider="linbit" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_Device_drive:1_last_failure_0" operation_key="p_Device_drive:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="13:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;13:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="9" rc-code="0" op-status="0" interval="0" op-digest="dc5cb13689611f4ed203745ed603621e" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_Device_drive:1_monitor_20000" operation_key="p_Device_drive:1_monitor_20000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="43:6:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;43:6:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="11" rc-code="0" op-status="0" interval="20000" op-digest="5d09870d493985952cc6e27d86f5ff38" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_NFS_Server:1" type="nfs-kernel-server" class="lsb" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_NFS_Server:1_last_0" operation_key="p_NFS_Server:1_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="7" rc-code="0" op-status="0" interval="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_NFS_Server:1_monitor_30000" operation_key="p_NFS_Server:1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="8" rc-code="0" op-status="0" interval="30000" op-digest="4811cef7f7f94e3a35a70be7916cb2fd" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_PingD:1" type="ping" class="ocf" provider="pacemaker" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_PingD:1_last_failure_0" operation_key="p_PingD:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="4" rc-code="0" op-status="0" interval="0" op-digest="e746ac7936e48a80d701184bf3591d18" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_PingD:1_monitor_10000" operation_key="p_PingD:1_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="6" rc-code="0" op-status="0" interval="10000" op-digest="4cbd9d437c5ab81b1238d21071f3920b" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="LVM_drive" type="LVM2" class="ocf" provider="nas" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="LVM_drive_last_0" operation_key="LVM_drive_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="14:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:7;14:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="10" rc-code="7" op-status="0" interval="0" op-digest="3de128da75b456c2b9e6a8229db6b5e9" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_iSCSI_Daemon:1" type="iscsi-scst" class="lsb" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_iSCSI_Daemon:1_last_failure_0" operation_key="p_iSCSI_Daemon:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="3" rc-code="0" op-status="0" interval="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_iSCSI_Daemon:1_monitor_30000" operation_key="p_iSCSI_Daemon:1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="5" rc-code="0" op-status="0" interval="30000" op-digest="4811cef7f7f94e3a35a70be7916cb2fd" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </lrm_resources>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </lrm>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </node_state>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </status>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 135: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 129 complete
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.18 -> 0.8.19 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/130, version=0.8.19): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.19 -> 0.8.20 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.20 -> 0.8.21 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/132, version=0.8.21): ok (rc=0)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 47 for master-p_Device_drive:1=10000 passed
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 49 for probe_complete=true passed
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=135, ref=pe_calc-dc-1347283423-71, seq=312, quorate=1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.21 -> 0.8.22 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 51 for pingd=100 passed
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi1	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi1	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.22 -> 0.8.23 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.23 -> 0.8.24 (S_POLICY_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi1 on Cluster-Server-1 (Stopped)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi1 on Cluster-Server-1 (Stopped)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi1 on Cluster-Server-2 (Stopped)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi1 on Cluster-Server-2 (Stopped)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for Target_iscsi1 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for Lun_iscsi1 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   Target_iscsi1	(Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   Lun_iscsi1	(Cluster-Server-1)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283423-71" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-7.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="7" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="79" operation="running" operation_key="iSCSI_iscsi1_running_0" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="74" operation="start" operation_key="Target_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="76" operation="start" operation_key="Lun_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="78" operation="start" operation_key="iSCSI_iscsi1_start_0" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="78" operation="start" operation_key="iSCSI_iscsi1_start_0" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="75" operation="monitor" operation_key="Target_iscsi1_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="60000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="74" operation="start" operation_key="Target_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="74" operation="start" operation_key="Target_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="240000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="10" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="78" operation="start" operation_key="iSCSI_iscsi1_start_0" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="12" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="77" operation="monitor" operation_key="Lun_iscsi1_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="76" operation="start" operation_key="Lun_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="76" operation="start" operation_key="Lun_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="10" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="74" operation="start" operation_key="Target_iscsi1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="78" operation="start" operation_key="iSCSI_iscsi1_start_0" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="16" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="13" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" priority="1000000" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="14" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="15" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="16" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" priority="1000000" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="11" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="12" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="10" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="11" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="14" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 7: 13 actions in 13 synapses
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1347283423-71) derived from /var/lib/pengine/pe-input-7.bz2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 78 fired and confirmed
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: monitor Target_iscsi1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Target_iscsi1
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=15:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_monitor_0
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi1
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[12] on Target_iscsi1 for client 40197, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: info: rsc:Target_iscsi1 probe[12] (pid 52484)
Sep 10 15:23:43 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 7: PEngine Input stored in: /var/lib/pengine/pe-input-7.bz2
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 12: monitor Target_iscsi1_monitor_0 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 16: monitor Lun_iscsi1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Lun_iscsi1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=16:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_monitor_0
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi1
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[13] on Lun_iscsi1 for client 40197, its parameters: path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi1] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1]  to the operation list.
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: info: rsc:Lun_iscsi1 probe[13] (pid 52485)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 13: monitor Lun_iscsi1_monitor_0 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=1, Pending=4, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.24 -> 0.8.25 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 54 for probe_complete=true passed
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.25 -> 0.8.26 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 56 for pingd=100 passed
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.26 -> 0.8.27 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.27 -> 0.8.28 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.28 -> 0.8.29 (S_TRANSITION_ENGINE)
SCSTTarget(Target_iscsi1)[52484]:	2012/09/10_15:23:43 DEBUG: Target_iscsi1 monitor : 7
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: WARN: Managed Target_iscsi1:monitor process 52484 exited with return code 7.
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: info: operation monitor[12] on Target_iscsi1 for client 40197: pid 52484 exited with return code 7
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resource Target_iscsi1 after complete monitor op (interval=0)
SCSTLun(Lun_iscsi1)[52485]:	2012/09/10_15:23:43 INFO: Lun_iscsi1 monitor : 7
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: WARN: Managed Lun_iscsi1:monitor process 52485 exited with return code 7.
Sep 10 15:23:43 Cluster-Server-2 lrmd: [40194]: info: operation monitor[13] on Lun_iscsi1 for client 40197: pid 52485 exited with return code 7
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Target_iscsi1_monitor_0 (call=12, rc=7, cib-update=136, confirmed=true) not running
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi1'
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resource Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Lun_iscsi1_monitor_0 (call=13, rc=7, cib-update=137, confirmed=true) not running
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi1'
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.29 -> 0.8.30 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.30 -> 0.8.31 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.31 -> 0.8.32 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_monitor_0 (15) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=2, Pending=3, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.32 -> 0.8.33 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_monitor_0 (16) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 14: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:23:43 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=3, Pending=2, Fired=1, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=4, Pending=2, Fired=0, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.33 -> 0.8.34 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_monitor_0 (12) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=5, Pending=1, Fired=0, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.34 -> 0.8.35 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_monitor_0 (13) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 11: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 10 fired and confirmed
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=6, Pending=0, Fired=2, Skipped=0, Incomplete=5, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 74: start Target_iscsi1_start_0 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=8, Pending=1, Fired=1, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.35 -> 0.8.36 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_start_0 (74) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 75: monitor Target_iscsi1_monitor_10000 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 76: start Lun_iscsi1_start_0 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=9, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.36 -> 0.8.37 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_monitor_10000 (75) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=10, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.37 -> 0.8.38 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_start_0 (76) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 79 fired and confirmed
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 77: monitor Lun_iscsi1_monitor_10000 on Cluster-Server-1
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=11, Pending=1, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 7 (Complete=12, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-7.bz2): In-progress
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.8.38 -> 0.8.39 (S_TRANSITION_ENGINE)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_monitor_10000 (77) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 7 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-7.bz2): Complete
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 7 is now complete
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 7 status: done - <null>
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=167
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:23:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:23:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 52401)
Sep 10 15:23:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 52402)
Sep 10 15:23:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 52402 exited with return code 0
Sep 10 15:23:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running
Sep 10 15:23:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 52401 exited with return code 0
Sep 10 15:23:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 52407)
Sep 10 15:23:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 52764)
Sep 10 15:23:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 52765)
Sep 10 15:23:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 52764 exited with return code 0
Sep 10 15:23:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running
Sep 10 15:23:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 52765 exited with return code 0
Sep 10 15:23:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 52771)
Sep 10 15:23:47 Cluster-Server-1 attrd_updater: [52440]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:47 Cluster-Server-1 attrd_updater: [52440]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:47 Cluster-Server-1 attrd_updater: [52440]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:47 Cluster-Server-1 attrd_updater: [52440]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:47 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 52407 exited with return code 0
Sep 10 15:23:47 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 52441)
drbd(p_Device_drive:0)[52441]:	2012/09/10_15:23:47 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:47 Cluster-Server-1 crm_attribute: [52471]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:23:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[52441]:	2012/09/10_15:23:48 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[52441]:	2012/09/10_15:23:48 DEBUG: drive: Command output: 
Sep 10 15:23:47 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 52820)
drbd(p_Device_drive:1)[52820]:	2012/09/10_15:23:47 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:47 Cluster-Server-2 crm_attribute: [52850]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:23:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[52820]:	2012/09/10_15:23:47 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[52820]:	2012/09/10_15:23:47 DEBUG: drive: Command output: 
Sep 10 15:23:47 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:23:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 52820 exited with return code 0
Sep 10 15:23:47 Cluster-Server-2 attrd_updater: [52859]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:47 Cluster-Server-2 attrd_updater: [52859]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:47 Cluster-Server-2 attrd_updater: [52859]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:47 Cluster-Server-2 attrd_updater: [52859]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 52771 exited with return code 0
Sep 10 15:23:48 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:23:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 52441 exited with return code 8
Sep 10 15:23:53 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi1 monitor[15] (pid 52482)
SCSTTarget(Target_iscsi1)[52482]:	2012/09/10_15:23:53 DEBUG: Target_iscsi1 monitor : 0
Sep 10 15:23:53 Cluster-Server-1 lrmd: [48712]: info: operation monitor[15] on Target_iscsi1 for client 48715: pid 52482 exited with return code 0
Sep 10 15:23:53 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi1 monitor[17] (pid 52488)
SCSTLun(Lun_iscsi1)[52488]:	2012/09/10_15:23:53 INFO: Lun_iscsi1 monitor : 0
SCSTLun(Lun_iscsi1)[52488]:	2012/09/10_15:23:53 INFO: Lun_iscsi1 monitor : 0
Sep 10 15:23:53 Cluster-Server-1 lrmd: [48712]: info: operation monitor[17] on Lun_iscsi1 for client 48715: pid 52488 exited with return code 0
Sep 10 15:23:57 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 52669)
Sep 10 15:23:57 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 53618)
Sep 10 15:23:58 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 52685)
drbd(p_Device_drive:0)[52685]:	2012/09/10_15:23:58 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:23:58 Cluster-Server-1 crm_attribute: [52715]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:23:58 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:23:58 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[52685]:	2012/09/10_15:23:58 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[52685]:	2012/09/10_15:23:58 DEBUG: drive: Command output: 
Sep 10 15:23:58 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:23:58 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 52685 exited with return code 8
Sep 10 15:23:59 Cluster-Server-1 attrd_updater: [52724]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:59 Cluster-Server-1 attrd_updater: [52724]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:59 Cluster-Server-1 attrd_updater: [52724]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:59 Cluster-Server-1 attrd_updater: [52724]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 52669 exited with return code 0
Sep 10 15:23:59 Cluster-Server-2 attrd_updater: [53827]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:23:59 Cluster-Server-2 attrd_updater: [53827]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:23:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:23:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:23:59 Cluster-Server-2 attrd_updater: [53827]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:23:59 Cluster-Server-2 attrd_updater: [53827]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:23:59 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 53618 exited with return code 0
Sep 10 15:24:03 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi1 monitor[15] (pid 52918)
SCSTTarget(Target_iscsi1)[52918]:	2012/09/10_15:24:03 DEBUG: Target_iscsi1 monitor : 0
Sep 10 15:24:03 Cluster-Server-1 lrmd: [48712]: info: operation monitor[15] on Target_iscsi1 for client 48715: pid 52918 exited with return code 0
Sep 10 15:24:03 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi1 monitor[17] (pid 52924)
SCSTLun(Lun_iscsi1)[52924]:	2012/09/10_15:24:03 INFO: Lun_iscsi1 monitor : 0
SCSTLun(Lun_iscsi1)[52924]:	2012/09/10_15:24:03 INFO: Lun_iscsi1 monitor : 0
Sep 10 15:24:03 Cluster-Server-1 lrmd: [48712]: info: operation monitor[17] on Lun_iscsi1 for client 48715: pid 52924 exited with return code 0
Sep 10 15:24:07 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 54512)
drbd(p_Device_drive:1)[54512]:	2012/09/10_15:24:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:07 Cluster-Server-2 crm_attribute: [54542]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:24:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[54512]:	2012/09/10_15:24:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[54512]:	2012/09/10_15:24:07 DEBUG: drive: Command output: 
Sep 10 15:24:07 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:24:07 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 54512 exited with return code 0
Sep 10 15:24:08 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 52932)
drbd(p_Device_drive:0)[52932]:	2012/09/10_15:24:08 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:08 Cluster-Server-1 crm_attribute: [52962]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:08 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:24:08 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[52932]:	2012/09/10_15:24:08 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[52932]:	2012/09/10_15:24:08 DEBUG: drive: Command output: 
Sep 10 15:24:08 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:24:08 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 52932 exited with return code 8
Sep 10 15:24:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 52969)
Sep 10 15:24:09 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 54599)
Sep 10 15:24:11 Cluster-Server-1 attrd_updater: [53169]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:11 Cluster-Server-1 attrd_updater: [53169]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:11 Cluster-Server-1 attrd_updater: [53169]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:11 Cluster-Server-1 attrd_updater: [53169]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:11 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:11 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:11 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 52969 exited with return code 0
Sep 10 15:24:11 Cluster-Server-2 attrd_updater: [54887]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:11 Cluster-Server-2 attrd_updater: [54887]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:11 Cluster-Server-2 attrd_updater: [54887]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:11 Cluster-Server-2 attrd_updater: [54887]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:11 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:11 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:11 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 54599 exited with return code 0
Sep 10 15:24:13 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi1 monitor[15] (pid 53183)
SCSTTarget(Target_iscsi1)[53183]:	2012/09/10_15:24:13 DEBUG: Target_iscsi1 monitor : 0
Sep 10 15:24:13 Cluster-Server-1 lrmd: [48712]: info: operation monitor[15] on Target_iscsi1 for client 48715: pid 53183 exited with return code 0
Sep 10 15:24:13 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi1 monitor[17] (pid 53189)
SCSTLun(Lun_iscsi1)[53189]:	2012/09/10_15:24:13 INFO: Lun_iscsi1 monitor : 0
SCSTLun(Lun_iscsi1)[53189]:	2012/09/10_15:24:13 INFO: Lun_iscsi1 monitor : 0
Sep 10 15:24:13 Cluster-Server-1 lrmd: [48712]: info: operation monitor[17] on Lun_iscsi1 for client 48715: pid 53189 exited with return code 0
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53288] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53288] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53288] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53290] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53290] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53290] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53292] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53292] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53292] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53294] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53294] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53294] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53303] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53303] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53303] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53312] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53312] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53312] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53319] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53319] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53319] is unregistered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [53326] registered
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:53326] disconnected.
Sep 10 15:24:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:53326] is unregistered
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 60000us
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 60000 vs 210000 (usec)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 8 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=45
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-6
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-6
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 59 for probe_complete=true passed
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-6
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-6: join_ack_nack
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete start op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 61 for pingd=100 passed
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete start op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 53340 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-6: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 63 for master-p_Device_drive:0=10000 passed
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 65 for probe_complete=true passed
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 67 for pingd=100 passed
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Target_iscsi2
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=14:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi2_monitor_0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi2
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[18] on Target_iscsi2 for client 48715, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi2] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi2 probe[18] (pid 53352)
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Lun_iscsi2
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=15:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi2_monitor_0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi2
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add operation monitor[19] on Lun_iscsi2 for client 48715, its parameters: path=[/dev/drive-CSD/iscsi2_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi2] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi2] to the operation list.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi2 probe[19] (pid 53353)
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 53348 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:15 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 70 for probe_complete=true passed
SCSTTarget(Target_iscsi2)[53352]:	2012/09/10_15:24:15 DEBUG: Target_iscsi2 monitor : 7
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: WARN: Managed Target_iscsi2:monitor process 53352 exited with return code 7.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[18] on Target_iscsi2 for client 48715: pid 53352 exited with return code 7
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi2_monitor_0 (call=18, rc=7, cib-update=40, confirmed=true) not running
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi2'
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 72 for pingd=100 passed
SCSTLun(Lun_iscsi2)[53353]:	2012/09/10_15:24:15 INFO: Lun_iscsi2 monitor : 7
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: WARN: Managed Lun_iscsi2:monitor process 53353 exited with return code 7.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[19] on Lun_iscsi2 for client 48715: pid 53353 exited with return code 7
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi2_monitor_0 (call=19, rc=7, cib-update=41, confirmed=true) not running
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi2'
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:24:15 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=84:8:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi2_start_0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi2
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add operation start[20] on Target_iscsi2 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[start] iqn=[iqn.2005-07.com.example:vdisk.iscsi2] CRM_meta_timeout=[240000] to the operation list.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi2 start[20] (pid 53368)
SCSTTarget(Target_iscsi2)[53368]:	2012/09/10_15:24:15 INFO: target iqn.2005-07.com.example:vdisk.iscsi2: Starting...
SCSTTarget(Target_iscsi2)[53368]:	2012/09/10_15:24:15 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi2
SCSTTarget(Target_iscsi2)[53368]:	2012/09/10_15:24:15 DEBUG: SCST target iqn.2005-07.com.example:vdisk.iscsi2: Started.
SCSTTarget(Target_iscsi2)[53368]:	2012/09/10_15:24:15 DEBUG: Target_iscsi2 start : 0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi2:start process 53368 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation start[20] on Target_iscsi2 for client 48715: pid 53368 exited with return code 0
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Target_iscsi2 after complete start op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi2_start_0 (call=20, rc=0, cib-update=42, confirmed=true) ok
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'Target_iscsi2'
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=85:8:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi2_monitor_10000
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add operation monitor[21] on Target_iscsi2 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[monitor] iqn=[iqn.2005-07.com.example:vdisk.iscsi2] CRM_meta_timeout=[60000] CRM_meta_interval=[10000] to the operation list.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi2 monitor[21] (pid 53388)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=86:8:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi2_start_0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi2
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add operation start[22] on Lun_iscsi2 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[60000] CRM_meta_name=[start] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi2] path=[/dev/drive-CSD/iscsi2_iSCSI] crm_feature_set=[3.0.6] lun=[0] device_name=[iscsi2] to the operation list.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi2 start[22] (pid 53389)
SCSTTarget(Target_iscsi2)[53388]:	2012/09/10_15:24:15 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi2:monitor process 53388 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 53388 exited with return code 0
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Disabling target iqn.2005-07.com.example:vdisk.iscsi2
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi2_monitor_10000 (call=21, rc=0, cib-update=43, confirmed=false) ok
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi2'
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Starting lun 0 on target iqn.2005-07.com.example:vdisk.iscsi2
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Opening device iscsi2, target iqn.2005-07.com.example:vdisk.iscsi2
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Adding LUN 0, device iscsi2, target iqn.2005-07.com.example:vdisk.iscsi2
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi2
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Started lun 0 on target iqn.2005-07.com.example:vdisk.iscsi2
SCSTLun(Lun_iscsi2)[53389]:	2012/09/10_15:24:15 INFO: Lun_iscsi2 start : 0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi2:start process 53389 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation start[22] on Lun_iscsi2 for client 48715: pid 53389 exited with return code 0
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi2_start_0 (call=22, rc=0, cib-update=44, confirmed=true) ok
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'Lun_iscsi2'
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=87:8:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi2_monitor_10000
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add operation monitor[23] on Lun_iscsi2 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi2] path=[/dev/drive-CSD/iscsi2_iSCSI] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] lun=[0] device_name=[iscsi2] to the operation list.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi2 monitor[23] (pid 53436)
SCSTLun(Lun_iscsi2)[53436]:	2012/09/10_15:24:15 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi2:monitor process 53436 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 53436 exited with return code 0
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi2_monitor_10000 (call=23, rc=0, cib-update=45, confirmed=false) ok
Sep 10 15:24:15 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi2'
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 53444)
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 53445)
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 53444 exited with return code 0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 53445 exited with return code 0
Sep 10 15:24:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.8.39 -> 0.9.1 (S_IDLE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.9.1) : Non-status change
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="8" num_updates="39" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="8" num_updates="39" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="9" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:23:43 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi2" __crm_diff_marker__="added:top" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Target_iscsi2" provider="nas" type="SCSTTarget" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Target_iscsi2-instance_attributes" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi2-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi2-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi2-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi2-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi2-meta_attributes" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Lun_iscsi2" provider="nas" type="SCSTLun" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Lun_iscsi2-instance_attributes" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi2-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi2-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi2-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi2_iSCSI" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi2-instance_attributes-device_name" name="device_name" value="iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi2-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi2-monitor-10" interval="10" name="monitor" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi2-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi2-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi2-meta_attributes" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="iSCSI_iscsi2_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi2" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi2_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi2" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi2_with_LVM_drive" rsc="iSCSI_iscsi2" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi2_with_iSCSI_Daemon" rsc="iSCSI_iscsi2" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.8.39 -> 0.9.1 from Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 138: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="8" num_updates="39" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="9" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:23:43 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="iSCSI_iscsi2" __crm_diff_marker__="added:top" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 210000us
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 8
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=171
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="Target_iscsi2" provider="nas" type="SCSTTarget" >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="Target_iscsi2-instance_attributes" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 210000us
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 8 (current: 8, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Target_iscsi2-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi2-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi2-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi2-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="Target_iscsi2-meta_attributes" >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Target_iscsi2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="Lun_iscsi2" provider="nas" type="SCSTLun" >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="Lun_iscsi2-instance_attributes" >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi2-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 210000us
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 210000 vs 0  (usec)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi2-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 8 (current: 8, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi2-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi2_iSCSI" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi2-instance_attributes-device_name" name="device_name" value="iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi2-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi2-monitor-10" interval="10" name="monitor" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi2-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi2-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="Lun_iscsi2-meta_attributes" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=173
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <constraints >
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="LVM_drive" id="iSCSI_iscsi2_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi2" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi2_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi2" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="iSCSI_iscsi2_with_LVM_drive" rsc="iSCSI_iscsi2" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="iSCSI_iscsi2_with_iSCSI_Daemon" rsc="iSCSI_iscsi2" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="added:top" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </constraints>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.9.1): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/139, version=0.9.2): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/141, version=0.9.4): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/142, version=0.9.5): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 58 for master-p_Device_drive:1=10000 passed
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 60 for probe_complete=true passed
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/144, version=0.9.8): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-6: Initializing join data (flag=true)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-6: Sending offer to Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-6: Sending offer to Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-6: Waiting on 2 outstanding join acks
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/146, version=0.9.9): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 147 : Parsing CIB options
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-6
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 62 for pingd=100 passed
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-6
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-6: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283455-85)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-6
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-6: Still waiting on 1 outstanding offers
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-6: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283455-19)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-6
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-6: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=177
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-6 for 2 clients
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-6: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 55212 exited with return code 0.
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/149, version=0.9.12): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-6: Still waiting on 2 integrated nodes
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-6 results
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-6: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-6: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/150, version=0.9.13): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-6
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-6: join_ack_nack
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-6: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/151, version=0.9.14): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-6: Updating node state to member for Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-6: Registered callback for LRM update 153
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/152, version=0.9.15): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 153 complete
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-6: Still waiting on 1 finalized nodes
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-6: Updating node state to member for Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-6: Registered callback for LRM update 155
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/154, version=0.9.20): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 155 complete
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-6 complete: join_update_complete_callback
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=158)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 159: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/156, version=0.9.22): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.21 -> 0.9.22 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.22 -> 0.9.23 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.23 -> 0.9.24 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/158, version=0.9.24): ok (rc=0)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 65 for probe_complete=true passed
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=159, ref=pe_calc-dc-1347283455-89, seq=312, quorate=1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.24 -> 0.9.25 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.25 -> 0.9.26 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi1	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi1	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.26 -> 0.9.27 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi2 on Cluster-Server-1 (Stopped)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi2 on Cluster-Server-1 (Stopped)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi2 on Cluster-Server-2 (Stopped)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi2 on Cluster-Server-2 (Stopped)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 67 for pingd=100 passed
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 69 for master-p_Device_drive:1=10000 passed
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for Target_iscsi2 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for Lun_iscsi2 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi1	(Started Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi1	(Started Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   Target_iscsi2	(Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   Lun_iscsi2	(Cluster-Server-1)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.27 -> 0.9.28 (S_POLICY_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283455-89" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-8.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="8" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="89" operation="running" operation_key="iSCSI_iscsi2_running_0" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 71 for probe_complete=true passed
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="start" operation_key="Target_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="86" operation="start" operation_key="Lun_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi2_start_0" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi2_start_0" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="85" operation="monitor" operation_key="Target_iscsi2_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi2" long-id="iSCSI_iscsi2:Target_iscsi2" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="60000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="start" operation_key="Target_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="84" operation="start" operation_key="Target_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi2" long-id="iSCSI_iscsi2:Target_iscsi2" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="240000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi2_start_0" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="monitor" operation_key="Target_iscsi2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi2" long-id="iSCSI_iscsi2:Target_iscsi2" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="14" operation="monitor" operation_key="Target_iscsi2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi2" long-id="iSCSI_iscsi2:Target_iscsi2" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="87" operation="monitor" operation_key="Lun_iscsi2_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi2" long-id="iSCSI_iscsi2:Lun_iscsi2" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi2" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi2_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="86" operation="start" operation_key="Lun_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="86" operation="start" operation_key="Lun_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi2" long-id="iSCSI_iscsi2:Lun_iscsi2" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device_name="iscsi2" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi2_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="start" operation_key="Target_iscsi2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi2_start_0" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="monitor" operation_key="Lun_iscsi2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi2" long-id="iSCSI_iscsi2:Lun_iscsi2" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi2" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi2_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="monitor" operation_key="Lun_iscsi2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi2" long-id="iSCSI_iscsi2:Lun_iscsi2" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi2" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi2_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" priority="1000000" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="16" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="monitor" operation_key="Target_iscsi2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="monitor" operation_key="Lun_iscsi2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" priority="1000000" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="13" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="14" operation="monitor" operation_key="Target_iscsi2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="15" operation="monitor" operation_key="Lun_iscsi2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="16" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 8: 13 actions in 13 synapses
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 8 (ref=pe_calc-dc-1347283455-89) derived from /var/lib/pengine/pe-input-8.bz2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 88 fired and confirmed
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: monitor Target_iscsi2_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 73 for pingd=100 passed
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Target_iscsi2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=17:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi2_monitor_0
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi2
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[14] on Target_iscsi2 for client 40197, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi2] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:24:15 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 8: PEngine Input stored in: /var/lib/pengine/pe-input-8.bz2
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: info: rsc:Target_iscsi2 probe[14] (pid 55213)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 14: monitor Target_iscsi2_monitor_0 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: monitor Lun_iscsi2_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Lun_iscsi2
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=18:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi2_monitor_0
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi2
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[15] on Lun_iscsi2 for client 40197, its parameters: path=[/dev/drive-CSD/iscsi2_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi2] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi2]  to the operation list.
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: info: rsc:Lun_iscsi2 probe[15] (pid 55214)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: monitor Lun_iscsi2_monitor_0 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.28 -> 0.9.29 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=1, Pending=4, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
SCSTTarget(Target_iscsi2)[55213]:	2012/09/10_15:24:15 DEBUG: Target_iscsi2 monitor : 7
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: WARN: Managed Target_iscsi2:monitor process 55213 exited with return code 7.
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[14] on Target_iscsi2 for client 40197: pid 55213 exited with return code 7
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Target_iscsi2_monitor_0 (call=14, rc=7, cib-update=160, confirmed=true) not running
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi2'
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.29 -> 0.9.30 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi2_monitor_0 (17) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=2, Pending=3, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
SCSTLun(Lun_iscsi2)[55214]:	2012/09/10_15:24:15 INFO: Lun_iscsi2 monitor : 7
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.30 -> 0.9.31 (S_TRANSITION_ENGINE)
SCSTLun(Lun_iscsi2)[55214]:	2012/09/10_15:24:15 INFO: Lun_iscsi2 monitor : 7
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: WARN: Managed Lun_iscsi2:monitor process 55214 exited with return code 7.
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[15] on Lun_iscsi2 for client 40197: pid 55214 exited with return code 7
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Lun_iscsi2_monitor_0 (call=15, rc=7, cib-update=161, confirmed=true) not running
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi2'
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.31 -> 0.9.32 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.32 -> 0.9.33 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi2_monitor_0 (18) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 16: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=3, Pending=2, Fired=1, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:24:15 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=4, Pending=2, Fired=0, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.33 -> 0.9.34 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi2_monitor_0 (14) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=5, Pending=1, Fired=0, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.34 -> 0.9.35 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi2_monitor_0 (15) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 13: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 12 fired and confirmed
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=6, Pending=0, Fired=2, Skipped=0, Incomplete=5, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 84: start Target_iscsi2_start_0 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=8, Pending=1, Fired=1, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.35 -> 0.9.36 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi2_start_0 (84) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 85: monitor Target_iscsi2_monitor_10000 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 86: start Lun_iscsi2_start_0 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=9, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.36 -> 0.9.37 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi2_monitor_10000 (85) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=10, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.37 -> 0.9.38 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi2_start_0 (86) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 89 fired and confirmed
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 87: monitor Lun_iscsi2_monitor_10000 on Cluster-Server-1
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=11, Pending=1, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 8 (Complete=12, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-8.bz2): In-progress
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.9.38 -> 0.9.39 (S_TRANSITION_ENGINE)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi2_monitor_10000 (87) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 8 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-8.bz2): Complete
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 8 is now complete
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 8 status: done - <null>
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=197
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 55233)
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 55234)
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 55234 exited with return code 0
Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:24:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 55233 exited with return code 0
Sep 10 15:24:18 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 53462)
drbd(p_Device_drive:0)[53462]:	2012/09/10_15:24:18 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:18 Cluster-Server-1 crm_attribute: [53492]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:18 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:24:18 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[53462]:	2012/09/10_15:24:18 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[53462]:	2012/09/10_15:24:18 DEBUG: drive: Command output: 
Sep 10 15:24:18 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:24:18 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 53462 exited with return code 8
Sep 10 15:24:21 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 53665)
Sep 10 15:24:21 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 55702)
Sep 10 15:24:23 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi1 monitor[15] (pid 53681)
SCSTTarget(Target_iscsi1)[53681]:	2012/09/10_15:24:23 DEBUG: Target_iscsi1 monitor : 0
Sep 10 15:24:23 Cluster-Server-1 lrmd: [48712]: info: operation monitor[15] on Target_iscsi1 for client 48715: pid 53681 exited with return code 0
Sep 10 15:24:23 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi1 monitor[17] (pid 53687)
SCSTLun(Lun_iscsi1)[53687]:	2012/09/10_15:24:23 INFO: Lun_iscsi1 monitor : 0
SCSTLun(Lun_iscsi1)[53687]:	2012/09/10_15:24:23 INFO: Lun_iscsi1 monitor : 0
Sep 10 15:24:23 Cluster-Server-1 lrmd: [48712]: info: operation monitor[17] on Lun_iscsi1 for client 48715: pid 53687 exited with return code 0
Sep 10 15:24:23 Cluster-Server-1 attrd_updater: [53697]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:23 Cluster-Server-1 attrd_updater: [53697]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:23 Cluster-Server-1 attrd_updater: [53697]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:23 Cluster-Server-1 attrd_updater: [53697]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:23 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:23 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:23 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 53665 exited with return code 0
Sep 10 15:24:23 Cluster-Server-2 attrd_updater: [55753]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:23 Cluster-Server-2 attrd_updater: [55753]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:23 Cluster-Server-2 attrd_updater: [55753]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:23 Cluster-Server-2 attrd_updater: [55753]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:23 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:23 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:23 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 55702 exited with return code 0
Sep 10 15:24:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 53700)
SCSTTarget(Target_iscsi2)[53700]:	2012/09/10_15:24:25 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:24:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 53700 exited with return code 0
Sep 10 15:24:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 53706)
SCSTLun(Lun_iscsi2)[53706]:	2012/09/10_15:24:25 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[53706]:	2012/09/10_15:24:25 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:24:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 53706 exited with return code 0
Sep 10 15:24:27 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 56142)
drbd(p_Device_drive:1)[56142]:	2012/09/10_15:24:27 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:27 Cluster-Server-2 crm_attribute: [56172]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:24:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[56142]:	2012/09/10_15:24:27 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[56142]:	2012/09/10_15:24:27 DEBUG: drive: Command output: 
Sep 10 15:24:27 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:24:27 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 56142 exited with return code 0
Sep 10 15:24:28 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 53714)
drbd(p_Device_drive:0)[53714]:	2012/09/10_15:24:28 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:28 Cluster-Server-1 crm_attribute: [53744]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:28 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:24:28 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[53714]:	2012/09/10_15:24:28 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[53714]:	2012/09/10_15:24:28 DEBUG: drive: Command output: 
Sep 10 15:24:28 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:24:28 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 53714 exited with return code 8
Sep 10 15:24:33 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi1 monitor[15] (pid 54125)
SCSTTarget(Target_iscsi1)[54125]:	2012/09/10_15:24:33 DEBUG: Target_iscsi1 monitor : 0
Sep 10 15:24:33 Cluster-Server-1 lrmd: [48712]: info: operation monitor[15] on Target_iscsi1 for client 48715: pid 54125 exited with return code 0
Sep 10 15:24:33 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi1 monitor[17] (pid 54140)
SCSTLun(Lun_iscsi1)[54140]:	2012/09/10_15:24:33 INFO: Lun_iscsi1 monitor : 0
SCSTLun(Lun_iscsi1)[54140]:	2012/09/10_15:24:33 INFO: Lun_iscsi1 monitor : 0
Sep 10 15:24:33 Cluster-Server-1 lrmd: [48712]: info: operation monitor[17] on Lun_iscsi1 for client 48715: pid 54140 exited with return code 0
Sep 10 15:24:33 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 54161)
Sep 10 15:24:33 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 56520)
Sep 10 15:24:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 54211)
SCSTTarget(Target_iscsi2)[54211]:	2012/09/10_15:24:35 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:24:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 54211 exited with return code 0
Sep 10 15:24:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 54217)
SCSTLun(Lun_iscsi2)[54217]:	2012/09/10_15:24:35 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[54217]:	2012/09/10_15:24:35 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:24:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 54217 exited with return code 0
Sep 10 15:24:35 Cluster-Server-1 attrd_updater: [54380]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:35 Cluster-Server-1 attrd_updater: [54380]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:35 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:35 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:35 Cluster-Server-1 attrd_updater: [54380]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:35 Cluster-Server-1 attrd_updater: [54380]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 54161 exited with return code 0
Sep 10 15:24:35 Cluster-Server-2 attrd_updater: [56954]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:35 Cluster-Server-2 attrd_updater: [56954]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:35 Cluster-Server-2 attrd_updater: [56954]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:35 Cluster-Server-2 attrd_updater: [56954]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:35 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:35 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:35 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 56520 exited with return code 0
Sep 10 15:24:38 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 54461)
drbd(p_Device_drive:0)[54461]:	2012/09/10_15:24:38 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:38 Cluster-Server-1 crm_attribute: [54491]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:38 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:24:38 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[54461]:	2012/09/10_15:24:38 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[54461]:	2012/09/10_15:24:38 DEBUG: drive: Command output: 
Sep 10 15:24:38 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:24:38 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 54461 exited with return code 8
Sep 10 15:24:39 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected a6b655d2314ddd0bdfa855f4ecc7ad93, calculated ddc6c83507ced705b25cf2f514760e7c
Sep 10 15:24:39 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.9.39 -> 0.10.1 not applied to 0.9.39: Failed application of an update diff
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.9.39 -> 0.10.1 (S_IDLE)
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.10.1) : Non-status change
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="9" num_updates="39" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="9" num_updates="39" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive id="Target_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi1-meta_attributes" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive id="Lun_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi1-meta_attributes" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="10" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:24:15 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="iSCSI_iscsi1-meta_attributes" __crm_diff_marker__="added:top" >
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_iscsi1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 162: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="9" num_updates="39" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <resources >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <group id="iSCSI_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive id="Target_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Target_iscsi1-meta_attributes" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Target_iscsi1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive id="Lun_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Lun_iscsi1-meta_attributes" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </group>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </resources>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="10" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:24:15 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="iSCSI_iscsi1" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="iSCSI_iscsi1-meta_attributes" __crm_diff_marker__="added:top" >
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="iSCSI_iscsi1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=Cluster-Server-1/cibadmin/2, version=0.10.1): ok (rc=0)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:24:39 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=162, ref=pe_calc-dc-1347283479-100, seq=312, quorate=1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:39 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.10.1): ok (rc=0)
Sep 10 15:24:39 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.10.1 from Cluster-Server-2
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 74 for master-p_Device_drive:0=10000 passed
Sep 10 15:24:40 Cluster-Server-1 cib: [54564]: ERROR: validate_cib_digest: Digest comparision failed: expected 83751b899e758f9b138d060ace084080 (/var/lib/heartbeat/crm/cib.ANRY1Q), calculated bd97ef3df10846e783bd64059be77e45
Sep 10 15:24:40 Cluster-Server-1 cib: [54564]: ERROR: retrieveCib: Checksum of /var/lib/heartbeat/crm/cib.uGGnOm failed!  Configuration contents ignored!
Sep 10 15:24:40 Cluster-Server-1 cib: [54564]: ERROR: retrieveCib: Usually this is caused by manual changes, please refer to http://clusterlabs.org/wiki/FAQ#cib_changes_detected
Sep 10 15:24:40 Cluster-Server-1 cib: [54564]: ERROR: crm_abort: write_cib_contents: Triggered fatal assert at io.c:662 : retrieveCib(tmp1, tmp2, FALSE) != NULL
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 76 for probe_complete=true passed
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Cancelling op 17 for Lun_iscsi1 (Lun_iscsi1:17)
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: cancel_op: operation monitor[17] on Lun_iscsi1 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1] path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] lun=[0] device_name=[iscsi1]  cancelled
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: debug: on_msg_cancel_op: operation 17 cancelled
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Op 17 for Lun_iscsi1 (Lun_iscsi1:17): cancelled
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=75:9:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_stop_0
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation stop[24] on Lun_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[stop] CRM_meta_timeout=[240000]  to the operation list.
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi1 stop[24] (pid 54566)
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi1_monitor_10000 (call=17, status=1, cib-update=0, confirmed=true) Cancelled
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi1'
Sep 10 15:24:40 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 78 for pingd=100 passed
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: WARN: Managed write_cib_contents process 54564 killed by signal 6 [SIGABRT - Abort].
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: ERROR: Managed write_cib_contents process 54564 dumped core
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: ERROR: cib_diskwrite_complete: Disk write failed: status=134, signo=6, exitcode=0
Sep 10 15:24:40 Cluster-Server-1 cib: [48709]: ERROR: cib_diskwrite_complete: Disabling disk writes after write failure
SCSTLun(Lun_iscsi1)[54566]:	2012/09/10_15:24:40 INFO: Stopping lun 0 on target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[54566]:	2012/09/10_15:24:40 INFO: Disabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[54566]:	2012/09/10_15:24:40 INFO: Removing LUN 0, device iscsi1, target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[54566]:	2012/09/10_15:24:40 INFO: Closing device iscsi1
SCSTLun(Lun_iscsi1)[54566]:	2012/09/10_15:24:40 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTLun(Lun_iscsi1)[54566]:	2012/09/10_15:24:40 INFO: Lun_iscsi1 stop : 0
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi1:stop process 54566 exited with return code 0.
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: operation stop[24] on Lun_iscsi1 for client 48715: pid 54566 exited with return code 0
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi1 after complete stop op (interval=0)
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi1_stop_0 (call=24, rc=0, cib-update=46, confirmed=true) ok
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending stop op to history for 'Lun_iscsi1'
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Cancelling op 15 for Target_iscsi1 (Target_iscsi1:15)
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: cancel_op: operation monitor[15] on Target_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[monitor] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[60000] CRM_meta_interval=[10000]  cancelled
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: debug: on_msg_cancel_op: operation 15 cancelled
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Op 15 for Target_iscsi1 (Target_iscsi1:15): cancelled
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=74:9:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_stop_0
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation stop[25] on Target_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[stop] CRM_meta_timeout=[240000]  to the operation list.
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi1 stop[25] (pid 54604)
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi1_monitor_10000 (call=15, status=1, cib-update=0, confirmed=true) Cancelled
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi1'
SCSTTarget(Target_iscsi1)[54604]:	2012/09/10_15:24:40 INFO: target iqn.2005-07.com.example:vdisk.iscsi1: Stopping...
SCSTTarget(Target_iscsi1)[54604]:	2012/09/10_15:24:40 INFO: disabling target iqn.2005-07.com.example:vdisk.iscsi1
SCSTTarget(Target_iscsi1)[54604]:	2012/09/10_15:24:40 INFO: deleting target iqn.2005-07.com.example:vdisk.iscsi1
SCSTTarget(Target_iscsi1)[54604]:	2012/09/10_15:24:40 INFO: target iqn.2005-07.com.example:vdisk.iscsi1: Stopped.
SCSTTarget(Target_iscsi1)[54604]:	2012/09/10_15:24:40 DEBUG: Target_iscsi1 stop : 0
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi1:stop process 54604 exited with return code 0.
Sep 10 15:24:40 Cluster-Server-1 lrmd: [48712]: info: operation stop[25] on Target_iscsi1 for client 48715: pid 54604 exited with return code 0
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi1 after complete stop op (interval=0)
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi1_stop_0 (call=25, rc=0, cib-update=47, confirmed=true) ok
Sep 10 15:24:40 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending stop op to history for 'Target_iscsi1'
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi1	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi1	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi1 cannot run anywhere
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi1 cannot run anywhere
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: notice: LogActions: Stop    Target_iscsi1	(Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: notice: LogActions: Stop    Lun_iscsi1	(Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:40 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283479-100" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-9.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="9" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="79" operation="stopped" operation_key="iSCSI_iscsi1_stopped_0" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="74" operation="stop" operation_key="Target_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="75" operation="stop" operation_key="Lun_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="78" operation="stop" operation_key="iSCSI_iscsi1_stop_0" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="78" operation="stop" operation_key="iSCSI_iscsi1_stop_0" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="74" operation="stop" operation_key="Target_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="stop" CRM_meta_timeout="240000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="75" operation="stop" operation_key="Lun_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="78" operation="stop" operation_key="iSCSI_iscsi1_stop_0" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="75" operation="stop" operation_key="Lun_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="stop" CRM_meta_timeout="240000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:40 Cluster-Server-2 cib: [57193]: ERROR: validate_cib_digest: Digest comparision failed: expected 83751b899e758f9b138d060ace084080 (/var/lib/heartbeat/crm/cib.IpZiPD), calculated bd97ef3df10846e783bd64059be77e45
Sep 10 15:24:40 Cluster-Server-2 cib: [57193]: ERROR: retrieveCib: Checksum of /var/lib/heartbeat/crm/cib.RqPk3s failed!  Configuration contents ignored!
Sep 10 15:24:40 Cluster-Server-2 cib: [57193]: ERROR: retrieveCib: Usually this is caused by manual changes, please refer to http://clusterlabs.org/wiki/FAQ#cib_changes_detected
Sep 10 15:24:40 Cluster-Server-2 cib: [57193]: ERROR: crm_abort: write_cib_contents: Triggered fatal assert at io.c:662 : retrieveCib(tmp1, tmp2, FALSE) != NULL
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="78" operation="stop" operation_key="iSCSI_iscsi1_stop_0" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="13" operation="all_stopped" operation_key="all_stopped" >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="74" operation="stop" operation_key="Target_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="75" operation="stop" operation_key="Lun_iscsi1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:24:40 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:40 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 76 for probe_complete=true passed
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 9: 5 actions in 5 synapses
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 9 (ref=pe_calc-dc-1347283479-100) derived from /var/lib/pengine/pe-input-9.bz2
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.1 -> 0.10.2 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.2 -> 0.10.3 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.3 -> 0.10.4 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.4 -> 0.10.5 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 78 fired and confirmed
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 75: stop Lun_iscsi1_stop_0 on Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 9 (Complete=0, Pending=1, Fired=2, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-9.bz2): In-progress
Sep 10 15:24:40 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 78 for pingd=100 passed
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.5 -> 0.10.6 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 9 (Complete=1, Pending=1, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-9.bz2): In-progress
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: WARN: Managed write_cib_contents process 57193 killed by signal 6 [SIGABRT - Abort].
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: ERROR: Managed write_cib_contents process 57193 dumped core
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: ERROR: cib_diskwrite_complete: Disk write failed: status=134, signo=6, exitcode=0
Sep 10 15:24:40 Cluster-Server-2 cib: [40192]: ERROR: cib_diskwrite_complete: Disabling disk writes after write failure
Sep 10 15:24:40 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 9: PEngine Input stored in: /var/lib/pengine/pe-input-9.bz2
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.6 -> 0.10.7 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_stop_0 (75) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 74: stop Target_iscsi1_stop_0 on Cluster-Server-1
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 9 (Complete=2, Pending=1, Fired=1, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-9.bz2): In-progress
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.7 -> 0.10.8 (S_TRANSITION_ENGINE)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_stop_0 (74) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 79 fired and confirmed
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 13 fired and confirmed
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 9 (Complete=3, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-9.bz2): In-progress
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 9 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-9.bz2): Complete
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 9 is now complete
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 9 status: done - <null>
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=201
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:40 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_resource: fail-count-Target_iscsi1=<null>
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for fail-count-Target_iscsi1
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_resource: fail-count-Lun_iscsi1=<null>
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for fail-count-Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected a700b90829c779c3d702f6268cbfa4cf, calculated f32705eada43e94fdc6e59e5d40ae5cf
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.10.8 -> 0.10.9 not applied to 0.10.8: Failed application of an update diff
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: delete_resource: Removing resource Target_iscsi1 for 54695_crm_resource (internal) on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: lrmd_rsc_destroy: removing resource Target_iscsi1
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: delete_rsc_entry: sync: Sending delete op for Target_iscsi1
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: notify_deleted: Notifying 54695_crm_resource on Cluster-Server-1 that Target_iscsi1 was deleted
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: WARN: decode_transition_key: Bad UUID (crm-resource-54695) in sscanf result (3) for 0:0:crm-resource-54695
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: send_direct_ack: Updating resouce Target_iscsi1 after complete delete op (interval=60000)
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: send_direct_ack: ACK'ing resource op Target_iscsi1_delete_60000 from 0:0:crm-resource-54695: lrm_invoke-lrmd-1347283482-22
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: notify_deleted: Triggering a refresh after 54695_crm_resource deleted Target_iscsi1 from the LRM
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] does not exist
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected 5bf27e45eabe2218eedc434e615d9900, calculated 7c9591feac24c870af9ec7a7dab5042e
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.11.1 -> 0.11.2 not applied to 0.11.1: Failed application of an update diff
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: delete_resource: Removing resource Lun_iscsi1 for 54695_crm_resource (internal) on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: lrmd_rsc_destroy: removing resource Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: delete_rsc_entry: sync: Sending delete op for Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: notify_deleted: Notifying 54695_crm_resource on Cluster-Server-1 that Lun_iscsi1 was deleted
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: WARN: decode_transition_key: Bad UUID (crm-resource-54695) in sscanf result (3) for 0:0:crm-resource-54695
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: send_direct_ack: Updating resouce Lun_iscsi1 after complete delete op (interval=60000)
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: send_direct_ack: ACK'ing resource op Lun_iscsi1_delete_60000 from 0:0:crm-resource-54695: lrm_invoke-lrmd-1347283482-23
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: notify_deleted: Triggering a refresh after 54695_crm_resource deleted Lun_iscsi1 from the LRM
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.11.1 -> 0.11.2 (sync in progress)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283482" />
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.11.1 -> 0.11.2 (sync in progress)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.11.2 from Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected e587dd2fa3e5e20e2d3eacfcc7f57c03, calculated ae649f492750d75e8e2fb5f04f74e3b9
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.11.5 -> 0.11.6 not applied to 0.11.5: Failed application of an update diff
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.11.5 -> 0.11.6 (sync in progress)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.11.6 -> 0.11.7 (sync in progress)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.11.7 from Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 80 for master-p_Device_drive:0=10000 passed
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 82 for probe_complete=true passed
Sep 10 15:24:42 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 84 for pingd=100 passed
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 86 for master-p_Device_drive:0=10000 passed
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 88 for probe_complete=true passed
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 90 for pingd=100 passed
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Target_iscsi1
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=14:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_monitor_0
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi1
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[26] on Target_iscsi1 for client 48715, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi1 probe[26] (pid 54703)
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=15:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_monitor_0
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[27] on Lun_iscsi1 for client 48715, its parameters: path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi1] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1]  to the operation list.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi1 probe[27] (pid 54709)
SCSTTarget(Target_iscsi1)[54703]:	2012/09/10_15:24:42 DEBUG: Target_iscsi1 monitor : 7
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: WARN: Managed Target_iscsi1:monitor process 54703 exited with return code 7.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: info: operation monitor[26] on Target_iscsi1 for client 48715: pid 54703 exited with return code 7
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi1_monitor_0 (call=26, rc=7, cib-update=57, confirmed=true) not running
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi1'
SCSTLun(Lun_iscsi1)[54709]:	2012/09/10_15:24:42 INFO: Lun_iscsi1 monitor : 7
SCSTLun(Lun_iscsi1)[54709]:	2012/09/10_15:24:42 INFO: Lun_iscsi1 monitor : 7
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: WARN: Managed Lun_iscsi1:monitor process 54709 exited with return code 7.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: info: operation monitor[27] on Lun_iscsi1 for client 48715: pid 54709 exited with return code 7
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi1_monitor_0 (call=27, rc=7, cib-update=58, confirmed=true) not running
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi1'
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:24:42 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54765] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54765] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54765] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54767] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54767] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54767] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54769] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54769] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54769] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54771] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54771] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54771] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54780] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54780] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54780] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54789] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54789] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54789] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54796] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54796] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54796] is unregistered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [54803] registered
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:54803] disconnected.
Sep 10 15:24:42 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:54803] is unregistered
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 70000us
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 70000 vs 300000 (usec)
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 9 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:24:42 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=58
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: notice: attrd_ais_dispatch: Update relayed from Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from Cluster-Server-1: fail-count-Target_iscsi1=<null>
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for fail-count-Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: notice: attrd_ais_dispatch: Update relayed from Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from Cluster-Server-1: fail-count-Lun_iscsi1=<null>
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for fail-count-Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[6])
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.10.8 -> 0.10.9 (S_IDLE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi1_last_0'] (Target_iscsi1_last_0 on Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi1_last_0, magic=0:0;74:9:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.10.9) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi1'] (origin=Cluster-Server-1/crmd/48, version=0.10.8): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 163: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.10.8): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=163, ref=pe_calc-dc-1347283482-103, seq=312, quorate=1
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[6])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi1'] (origin=Cluster-Server-1/crmd/49, version=0.10.9): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.10.8 -> 0.10.9 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi1_last_0'] (Target_iscsi1_last_0 on Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi1_last_0, magic=0:0;74:9:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.10.9) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi1	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi1	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 164: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi1 cannot run anywhere
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi1 cannot run anywhere
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="10" num_updates="9" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="11" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:24:39 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <crm_config >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283482" __crm_diff_marker__="added:top" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </cluster_property_set>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </crm_config>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=Cluster-Server-1/crmd/52, version=0.11.1): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi1	(Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi1	(Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[6])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi1'] (origin=Cluster-Server-1/crmd/53, version=0.11.1): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[7])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi1'] (origin=local/crmd/165, version=0.11.1): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: delete_resource: Removing resource Target_iscsi1 for 54695_crm_resource (internal) on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: lrmd_rsc_destroy: removing resource Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: delete_rsc_entry: sync: Sending delete op for Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: notify_deleted: Notifying 54695_crm_resource on Cluster-Server-1 that Target_iscsi1 was deleted
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: WARN: decode_transition_key: Bad UUID (crm-resource-54695) in sscanf result (3) for 0:0:crm-resource-54695
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: send_direct_ack: Updating resouce Target_iscsi1 after complete delete op (interval=60000)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: send_direct_ack: ACK'ing resource op Target_iscsi1_delete_60000 from 0:0:crm-resource-54695: lrm_invoke-lrmd-1347283482-104
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: notify_deleted: Triggering a refresh after 54695_crm_resource deleted Target_iscsi1 from the LRM
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[7])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi1'] (origin=local/crmd/166, version=0.11.2): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 10: PEngine Input stored in: /var/lib/pengine/pe-input-10.bz2
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283482" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.11.2): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[6])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi1'] (origin=Cluster-Server-1/crmd/54, version=0.11.3): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=Cluster-Server-1/crmd/56, version=0.11.4): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/168, version=0.11.5): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[2])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi1'] (origin=local/crmd/169, version=0.11.5): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: delete_resource: Removing resource Lun_iscsi1 for 54695_crm_resource (internal) on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: lrmd_rsc_destroy: removing resource Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: delete_rsc_entry: sync: Sending delete op for Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: notify_deleted: Notifying 54695_crm_resource on Cluster-Server-1 that Lun_iscsi1 was deleted
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: WARN: decode_transition_key: Bad UUID (crm-resource-54695) in sscanf result (3) for 0:0:crm-resource-54695
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: send_direct_ack: Updating resouce Lun_iscsi1 after complete delete op (interval=60000)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: send_direct_ack: ACK'ing resource op Lun_iscsi1_delete_60000 from 0:0:crm-resource-54695: lrm_invoke-lrmd-1347283482-105
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: notify_deleted: Triggering a refresh after 54695_crm_resource deleted Lun_iscsi1 from the LRM
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[2])
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi1'] (origin=local/crmd/170, version=0.11.6): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283482" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.10.9 -> 0.11.1 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.11.1) : Non-status change
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="10" num_updates="9" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="10" num_updates="9" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="11" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:24:39 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <crm_config >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283482" __crm_diff_marker__="added:top" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </cluster_property_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </crm_config>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.11.1 -> 0.11.2 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi1_last_0'] (Lun_iscsi1_last_0 on Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi1_last_0, magic=0:0;75:9:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.11.2) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/172, version=0.11.7): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.11.7): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=164, ref=pe_calc-dc-1347283482-106, seq=312, quorate=1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.11.1 -> 0.11.2 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi1_last_0'] (Target_iscsi1_last_0 on Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi1_last_0, magic=0:7;15:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.11.2) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.11.1 -> 0.11.2 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi1_last_0'] (Target_iscsi1_last_0 on Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi1_last_0, magic=0:7;15:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.11.2) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.11.2 -> 0.11.3 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi1_last_0'] (Lun_iscsi1_last_0 on Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi1_last_0, magic=0:0;75:9:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.11.3) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi1	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi1	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.3 -> 0.11.4 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.4 -> 0.11.5 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.11.5 -> 0.11.6 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 81 for pingd=100 passed
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi1 cannot run anywhere
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi1_last_0'] (Lun_iscsi1_last_0 on Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi1_last_0, magic=0:7;16:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.11.6) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.11.5 -> 0.11.6 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi1_last_0'] (Lun_iscsi1_last_0 on Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi1_last_0, magic=0:7;16:7:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.11.6) : Resource op removal
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.6 -> 0.11.7 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 83 for probe_complete=true passed
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.7 -> 0.11.8 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.8 -> 0.11.9 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.9 -> 0.11.10 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.10 -> 0.11.11 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi1 cannot run anywhere
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi1 on Cluster-Server-1 (Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: handle_response: pe_calc calculation pe_calc-dc-1347283482-103 is obsolete
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi1	(Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi1	(Stopped)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 173: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 174: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 175: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 176: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 177: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 178: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 179: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: handle_response: pe_calc calculation pe_calc-dc-1347283482-106 is obsolete
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.11 -> 0.11.12 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.12 -> 0.11.13 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 11: PEngine Input stored in: /var/lib/pengine/pe-input-11.bz2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.13 -> 0.11.14 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.14 -> 0.11.15 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 86 for pingd=100 passed
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi1	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi1	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 88 for probe_complete=true passed
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=179, ref=pe_calc-dc-1347283482-107, seq=312, quorate=1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 180 : Parsing CIB options
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.15 -> 0.11.16 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.16 -> 0.11.17 (S_POLICY_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi1 cannot run anywhere
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi1 cannot run anywhere
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi1 on Cluster-Server-1 (Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi1 on Cluster-Server-1 (Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi1 on Cluster-Server-2 (Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi1 on Cluster-Server-2 (Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi1	(Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi1	(Stopped)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283482-107" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-12.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="12" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="14" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi1" long-id="iSCSI_iscsi1:Target_iscsi1" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi1" long-id="iSCSI_iscsi1:Lun_iscsi1" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi1" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi1_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" priority="1000000" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="16" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" priority="1000000" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="13" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="14" operation="monitor" operation_key="Target_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="15" operation="monitor" operation_key="Lun_iscsi1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="16" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 12: 7 actions in 7 synapses
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 12 (ref=pe_calc-dc-1347283482-107) derived from /var/lib/pengine/pe-input-12.bz2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: monitor Target_iscsi1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=17:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi1_monitor_0
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi1
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[16] on Target_iscsi1 for client 40197, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi1] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: info: rsc:Target_iscsi1 probe[16] (pid 57384)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 14: monitor Target_iscsi1_monitor_0 on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: monitor Lun_iscsi1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=18:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi1_monitor_0
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi1
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[17] on Lun_iscsi1 for client 40197, its parameters: path=[/dev/drive-CSD/iscsi1_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi1] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi1]  to the operation list.
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: info: rsc:Lun_iscsi1 probe[17] (pid 57385)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: monitor Lun_iscsi1_monitor_0 on Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 12 (Complete=0, Pending=4, Fired=4, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-12.bz2): In-progress
Sep 10 15:24:42 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 12: PEngine Input stored in: /var/lib/pengine/pe-input-12.bz2
SCSTTarget(Target_iscsi1)[57384]:	2012/09/10_15:24:42 DEBUG: Target_iscsi1 monitor : 7
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: WARN: Managed Target_iscsi1:monitor process 57384 exited with return code 7.
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: info: operation monitor[16] on Target_iscsi1 for client 40197: pid 57384 exited with return code 7
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Target_iscsi1_monitor_0 (call=16, rc=7, cib-update=181, confirmed=true) not running
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi1'
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.17 -> 0.11.18 (S_TRANSITION_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_monitor_0 (17) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 12 (Complete=1, Pending=3, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-12.bz2): In-progress
SCSTLun(Lun_iscsi1)[57385]:	2012/09/10_15:24:42 INFO: Lun_iscsi1 monitor : 7
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: WARN: Managed Lun_iscsi1:monitor process 57385 exited with return code 7.
Sep 10 15:24:42 Cluster-Server-2 lrmd: [40194]: info: operation monitor[17] on Lun_iscsi1 for client 40197: pid 57385 exited with return code 7
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Lun_iscsi1_monitor_0 (call=17, rc=7, cib-update=182, confirmed=true) not running
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi1'
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.18 -> 0.11.19 (S_TRANSITION_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_monitor_0 (18) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 16: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 12 (Complete=2, Pending=2, Fired=1, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-12.bz2): In-progress
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 12 (Complete=3, Pending=2, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-12.bz2): In-progress
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.19 -> 0.11.20 (S_TRANSITION_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi1_monitor_0 (14) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 12 (Complete=4, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-12.bz2): In-progress
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.11.20 -> 0.11.21 (S_TRANSITION_ENGINE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi1_monitor_0 (15) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 13: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 12 fired and confirmed
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 12 (Complete=5, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-12.bz2): In-progress
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 12 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-12.bz2): Complete
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 12 is now complete
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 12 status: done - <null>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=218
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.11.21 -> 0.12.1 (S_IDLE)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.12.1) : Non-status change
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="11" num_updates="21" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="11" num_updates="21" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi1" __crm_diff_marker__="removed:top" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Target_iscsi1" provider="nas" type="SCSTTarget" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Target_iscsi1-instance_attributes" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi1-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi1-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi1-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi1-meta_attributes" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Lun_iscsi1" provider="nas" type="SCSTLun" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Lun_iscsi1-instance_attributes" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi1_iSCSI" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-device_name" name="device_name" value="iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi1-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi1-monitor-10" interval="10" name="monitor" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi1-meta_attributes" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="iSCSI_iscsi1-meta_attributes" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_iscsi1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="iSCSI_iscsi1_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi1_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi1_with_LVM_drive" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi1_with_iSCSI_Daemon" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="12" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="crmd" cib-last-written="Mon Sep 10 15:24:42 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 183: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.11.21 -> 0.12.1 from Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 300000us
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="11" num_updates="21" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <resources >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <group id="iSCSI_iscsi1" __crm_diff_marker__="removed:top" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive class="ocf" id="Target_iscsi1" provider="nas" type="SCSTTarget" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <instance_attributes id="Target_iscsi1-instance_attributes" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Target_iscsi1-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </instance_attributes>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <operations >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Target_iscsi1-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Target_iscsi1-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 9
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=222
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Target_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </operations>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 300000us
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Target_iscsi1-meta_attributes" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 9 (current: 9, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive class="ocf" id="Lun_iscsi1" provider="nas" type="SCSTLun" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <instance_attributes id="Lun_iscsi1-instance_attributes" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi1-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi1-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi1-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi1_iSCSI" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi1-instance_attributes-device_name" name="device_name" value="iscsi1" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi1-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </instance_attributes>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <operations >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Lun_iscsi1-monitor-10" interval="10" name="monitor" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Lun_iscsi1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Lun_iscsi1-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 300000us
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </operations>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 300000 vs 0  (usec)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 9 (current: 9, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Lun_iscsi1-meta_attributes" >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <meta_attributes id="iSCSI_iscsi1-meta_attributes" >
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <nvpair id="iSCSI_iscsi1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </meta_attributes>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </group>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </resources>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <constraints >
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_order first="LVM_drive" id="iSCSI_iscsi1_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi1_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi1" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_colocation id="iSCSI_iscsi1_with_LVM_drive" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_colocation id="iSCSI_iscsi1_with_iSCSI_Daemon" rsc="iSCSI_iscsi1" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="removed:top" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </constraints>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=224
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="12" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="crmd" cib-last-written="Mon Sep 10 15:24:42 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.12.1): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/184, version=0.12.2): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/186, version=0.12.4): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/187, version=0.12.5): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 90 for master-p_Device_drive:1=10000 passed
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 92 for probe_complete=true passed
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:24:42 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 94 for pingd=100 passed
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/189, version=0.12.9): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-7: Initializing join data (flag=true)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-7: Sending offer to Cluster-Server-1
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-7: Sending offer to Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-7: Waiting on 2 outstanding join acks
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/191, version=0.12.10): ok (rc=0)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 192 : Parsing CIB options
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-7
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-7
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-7: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283482-117)
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-7
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:24:42 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-7: Still waiting on 1 outstanding offers
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-7
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-7
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 93 for pingd=100 passed
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-7
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-7: join_ack_nack
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 95 for probe_complete=true passed
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete start op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-7: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:43 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 97 for master-p_Device_drive:0=10000 passed
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 99 for probe_complete=true passed
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 102 for pingd=100 passed
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 104 for probe_complete=true passed
Sep 10 15:24:43 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 106 for pingd=100 passed
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-7: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283483-25)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-7
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-7: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=228
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-7 for 2 clients
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-7: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/194, version=0.12.12): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-7: Still waiting on 2 integrated nodes
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-7 results
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-7: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-7: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-7
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-7: join_ack_nack
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/195, version=0.12.13): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-7: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/196, version=0.12.14): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-7: Updating node state to member for Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-7: Registered callback for LRM update 198
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/197, version=0.12.15): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 198 complete
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-7: Still waiting on 1 finalized nodes
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-7: Updating node state to member for Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-7: Registered callback for LRM update 200
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/199, version=0.12.17): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 200 complete
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-7 complete: join_update_complete_callback
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=203)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 204: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/201, version=0.12.19): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.18 -> 0.12.19 (S_POLICY_ENGINE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.19 -> 0.12.20 (S_POLICY_ENGINE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.20 -> 0.12.21 (S_POLICY_ENGINE)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/203, version=0.12.21): ok (rc=0)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 96 for master-p_Device_drive:1=10000 passed
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 98 for probe_complete=true passed
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 100 for pingd=100 passed
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=204, ref=pe_calc-dc-1347283483-121, seq=312, quorate=1
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.21 -> 0.12.22 (S_POLICY_ENGINE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.22 -> 0.12.23 (S_POLICY_ENGINE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.23 -> 0.12.24 (S_POLICY_ENGINE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283483-121" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-13.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="13" />
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 13: 0 actions in 0 synapses
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 13 (ref=pe_calc-dc-1347283483-121) derived from /var/lib/pengine/pe-input-13.bz2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 13 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-13.bz2): Complete
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 13 is now complete
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 13 status: done - <null>
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=238
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:24:43 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 13: PEngine Input stored in: /var/lib/pengine/pe-input-13.bz2
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.24 -> 0.12.25 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.25 -> 0.12.26 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.26 -> 0.12.27 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.27 -> 0.12.28 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.28 -> 0.12.29 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.29 -> 0.12.30 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 103 for pingd=100 passed
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:24:43 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.12.30 -> 0.12.31 (S_IDLE)
Sep 10 15:24:43 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 105 for probe_complete=true passed
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 54844)
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 54848)
SCSTTarget(Target_iscsi2)[54844]:	2012/09/10_15:24:45 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 54844 exited with return code 0
SCSTLun(Lun_iscsi2)[54848]:	2012/09/10_15:24:45 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 54848 exited with return code 0
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 54858)
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 54859)
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 54858 exited with return code 0
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 54859 exited with return code 0
Sep 10 15:24:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 54864)
Sep 10 15:24:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 57812)
Sep 10 15:24:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 57813)
Sep 10 15:24:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 57812 exited with return code 0
Sep 10 15:24:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running
Sep 10 15:24:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 57813 exited with return code 0
Sep 10 15:24:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 57818)
Sep 10 15:24:47 Cluster-Server-1 attrd_updater: [54887]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:47 Cluster-Server-1 attrd_updater: [54887]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:47 Cluster-Server-1 attrd_updater: [54887]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:47 Cluster-Server-1 attrd_updater: [54887]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:47 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:47 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 54864 exited with return code 0
Sep 10 15:24:47 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 57868)
drbd(p_Device_drive:1)[57868]:	2012/09/10_15:24:47 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:47 Cluster-Server-2 crm_attribute: [57898]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:24:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[57868]:	2012/09/10_15:24:47 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[57868]:	2012/09/10_15:24:47 DEBUG: drive: Command output: 
Sep 10 15:24:47 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:24:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 57868 exited with return code 0
Sep 10 15:24:47 Cluster-Server-2 attrd_updater: [57907]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:47 Cluster-Server-2 attrd_updater: [57907]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:47 Cluster-Server-2 attrd_updater: [57907]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:47 Cluster-Server-2 attrd_updater: [57907]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:47 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:47 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 57818 exited with return code 0
Sep 10 15:24:48 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 54888)
drbd(p_Device_drive:0)[54888]:	2012/09/10_15:24:48 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:48 Cluster-Server-1 crm_attribute: [54918]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:24:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[54888]:	2012/09/10_15:24:48 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[54888]:	2012/09/10_15:24:48 DEBUG: drive: Command output: 
Sep 10 15:24:48 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:24:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 54888 exited with return code 8
Sep 10 15:24:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 55053)
Sep 10 15:24:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 55054)
SCSTTarget(Target_iscsi2)[55053]:	2012/09/10_15:24:55 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:24:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 55053 exited with return code 0
SCSTLun(Lun_iscsi2)[55054]:	2012/09/10_15:24:55 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[55054]:	2012/09/10_15:24:55 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:24:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 55054 exited with return code 0
Sep 10 15:24:57 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 55070)
Sep 10 15:24:57 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 58667)
Sep 10 15:24:58 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 55086)
drbd(p_Device_drive:0)[55086]:	2012/09/10_15:24:58 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:24:58 Cluster-Server-1 crm_attribute: [55116]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:24:58 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:24:58 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[55086]:	2012/09/10_15:24:58 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[55086]:	2012/09/10_15:24:58 DEBUG: drive: Command output: 
Sep 10 15:24:58 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:24:58 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 55086 exited with return code 8
Sep 10 15:24:59 Cluster-Server-1 attrd_updater: [55125]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:59 Cluster-Server-1 attrd_updater: [55125]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:59 Cluster-Server-1 attrd_updater: [55125]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:59 Cluster-Server-1 attrd_updater: [55125]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 55070 exited with return code 0
Sep 10 15:24:59 Cluster-Server-2 attrd_updater: [58760]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:24:59 Cluster-Server-2 attrd_updater: [58760]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:24:59 Cluster-Server-2 attrd_updater: [58760]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:24:59 Cluster-Server-2 attrd_updater: [58760]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:24:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:24:59 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:24:59 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 58667 exited with return code 0
Sep 10 15:25:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 55379)
Sep 10 15:25:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 55380)
SCSTTarget(Target_iscsi2)[55379]:	2012/09/10_15:25:05 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:25:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 55379 exited with return code 0
SCSTLun(Lun_iscsi2)[55380]:	2012/09/10_15:25:05 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[55380]:	2012/09/10_15:25:05 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:25:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 55380 exited with return code 0
Sep 10 15:25:07 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 59458)
drbd(p_Device_drive:1)[59458]:	2012/09/10_15:25:07 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:07 Cluster-Server-2 crm_attribute: [59488]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:25:07 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[59458]:	2012/09/10_15:25:07 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[59458]:	2012/09/10_15:25:07 DEBUG: drive: Command output: 
Sep 10 15:25:07 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:25:07 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 59458 exited with return code 0
Sep 10 15:25:08 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 55393)
drbd(p_Device_drive:0)[55393]:	2012/09/10_15:25:08 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:08 Cluster-Server-1 crm_attribute: [55423]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:08 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:25:08 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[55393]:	2012/09/10_15:25:08 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[55393]:	2012/09/10_15:25:08 DEBUG: drive: Command output: 
Sep 10 15:25:08 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:25:08 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 55393 exited with return code 8
Sep 10 15:25:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 55592)
Sep 10 15:25:09 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 59563)
Sep 10 15:25:12 Cluster-Server-1 attrd_updater: [55636]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:12 Cluster-Server-1 attrd_updater: [55636]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:12 Cluster-Server-1 attrd_updater: [55636]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:12 Cluster-Server-1 attrd_updater: [55636]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:12 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 55592 exited with return code 0
Sep 10 15:25:12 Cluster-Server-2 attrd_updater: [59939]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:12 Cluster-Server-2 attrd_updater: [59939]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:12 Cluster-Server-2 attrd_updater: [59939]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:12 Cluster-Server-2 attrd_updater: [59939]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:12 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 59563 exited with return code 0
Sep 10 15:25:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:12 Cluster-Server-2 attrd: [40195]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55732] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55732] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55732] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55734] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55734] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55734] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55736] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55736] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55736] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55738] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55738] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55738] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55747] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55747] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55747] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55756] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55756] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55756] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55763] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55763] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55763] is unregistered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [55770] registered
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:55770] disconnected.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:55770] is unregistered
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 80000us
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 80000 vs 310000 (usec)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 10 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=60
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-8
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-8
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 109 for pingd=100 passed
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 111 for probe_complete=true passed
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-8
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-8: join_ack_nack
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete start op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-8: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 113 for master-p_Device_drive:0=10000 passed
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 115 for probe_complete=true passed
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 117 for pingd=100 passed
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:25:14 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Target_iscsi3
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=14:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_monitor_0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi3
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[28] on Target_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi3 probe[28] (pid 55796)
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Lun_iscsi3
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=15:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_monitor_0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi3
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[29] on Lun_iscsi3 for client 48715, its parameters: path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi3] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3]  to the operation list.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi3 probe[29] (pid 55797)
SCSTTarget(Target_iscsi3)[55796]:	2012/09/10_15:25:14 DEBUG: Target_iscsi3 monitor : 7
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: WARN: Managed Target_iscsi3:monitor process 55796 exited with return code 7.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[28] on Target_iscsi3 for client 48715: pid 55796 exited with return code 7
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi3_monitor_0 (call=28, rc=7, cib-update=65, confirmed=true) not running
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi3'
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 120 for pingd=100 passed
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 122 for probe_complete=true passed
SCSTLun(Lun_iscsi3)[55797]:	2012/09/10_15:25:14 INFO: Lun_iscsi3 monitor : 7
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: WARN: Managed Lun_iscsi3:monitor process 55797 exited with return code 7.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[29] on Lun_iscsi3 for client 48715: pid 55797 exited with return code 7
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi3_monitor_0 (call=29, rc=7, cib-update=66, confirmed=true) not running
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi3'
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:25:14 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=84:14:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_start_0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi3
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[30] on Target_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[start] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[240000]  to the operation list.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi3 start[30] (pid 55810)
SCSTTarget(Target_iscsi3)[55810]:	2012/09/10_15:25:14 INFO: target iqn.2005-07.com.example:vdisk.iscsi3: Starting...
SCSTTarget(Target_iscsi3)[55810]:	2012/09/10_15:25:14 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi3
SCSTTarget(Target_iscsi3)[55810]:	2012/09/10_15:25:14 DEBUG: SCST target iqn.2005-07.com.example:vdisk.iscsi3: Started.
SCSTTarget(Target_iscsi3)[55810]:	2012/09/10_15:25:14 DEBUG: Target_iscsi3 start : 0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi3:start process 55810 exited with return code 0.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: operation start[30] on Target_iscsi3 for client 48715: pid 55810 exited with return code 0
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi3 after complete start op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi3_start_0 (call=30, rc=0, cib-update=67, confirmed=true) ok
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'Target_iscsi3'
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=85:14:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_monitor_10000
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[31] on Target_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[monitor] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[60000] CRM_meta_interval=[10000]  to the operation list.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi3 monitor[31] (pid 55830)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=86:14:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_start_0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi3
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[32] on Lun_iscsi3 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[60000] CRM_meta_name=[start] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3] path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] lun=[0] device_name=[iscsi3]  to the operation list.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi3 start[32] (pid 55831)
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Disabling target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Starting lun 0 on target iqn.2005-07.com.example:vdisk.iscsi3
SCSTTarget(Target_iscsi3)[55830]:	2012/09/10_15:25:14 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi3:monitor process 55830 exited with return code 0.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 55830 exited with return code 0
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi3_monitor_10000 (call=31, rc=0, cib-update=68, confirmed=false) ok
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi3'
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Opening device iscsi3, target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Adding LUN 0, device iscsi3, target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Started lun 0 on target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[55831]:	2012/09/10_15:25:14 INFO: Lun_iscsi3 start : 0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi3:start process 55831 exited with return code 0.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: operation start[32] on Lun_iscsi3 for client 48715: pid 55831 exited with return code 0
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi3 after complete start op (interval=0)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi3_start_0 (call=32, rc=0, cib-update=69, confirmed=true) ok
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'Lun_iscsi3'
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=87:14:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_monitor_10000
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[33] on Lun_iscsi3 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3] path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] lun=[0] device_name=[iscsi3]  to the operation list.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi3 monitor[33] (pid 55878)
SCSTLun(Lun_iscsi3)[55878]:	2012/09/10_15:25:14 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi3:monitor process 55878 exited with return code 0.
Sep 10 15:25:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 55878 exited with return code 0
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi3_monitor_10000 (call=33, rc=0, cib-update=70, confirmed=false) ok
Sep 10 15:25:14 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi3'
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.12.31 -> 0.13.1 (S_IDLE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.13.1) : Non-status change
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="12" num_updates="31" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="12" num_updates="31" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="13" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:24:42 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi3" __crm_diff_marker__="added:top" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Target_iscsi3" provider="nas" type="SCSTTarget" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Target_iscsi3-instance_attributes" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi3-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi3-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi3-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi3-meta_attributes" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi3-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Lun_iscsi3" provider="nas" type="SCSTLun" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Lun_iscsi3-instance_attributes" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi3_iSCSI" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-device_name" name="device_name" value="iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi3-monitor-10" interval="10" name="monitor" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi3-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi3-meta_attributes" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="iSCSI_iscsi3_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi3_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi3_with_LVM_drive" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi3_with_iSCSI_Daemon" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 205: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.12.31 -> 0.13.1 from Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="12" num_updates="31" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="13" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:24:42 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="iSCSI_iscsi3" __crm_diff_marker__="added:top" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="Target_iscsi3" provider="nas" type="SCSTTarget" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="Target_iscsi3-instance_attributes" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Target_iscsi3-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 310000us
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi3-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi3-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 10
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Target_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=242
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="Target_iscsi3-meta_attributes" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Target_iscsi3-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="Lun_iscsi3" provider="nas" type="SCSTLun" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="Lun_iscsi3-instance_attributes" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi3-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi3-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi3-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi3_iSCSI" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi3-instance_attributes-device_name" name="device_name" value="iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi3-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi3-monitor-10" interval="10" name="monitor" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi3-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="Lun_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="Lun_iscsi3-meta_attributes" >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="Lun_iscsi3-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <constraints >
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="LVM_drive" id="iSCSI_iscsi3_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi3_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="iSCSI_iscsi3_with_LVM_drive" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="iSCSI_iscsi3_with_iSCSI_Daemon" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="added:top" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </constraints>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.13.1): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 310000us
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 10 (current: 10, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 310000us
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 310000 vs 0  (usec)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 10 (current: 10, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=244
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/206, version=0.13.2): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/208, version=0.13.4): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/209, version=0.13.5): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 107 for master-p_Device_drive:1=10000 passed
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 109 for probe_complete=true passed
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/211, version=0.13.8): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-8: Initializing join data (flag=true)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-8: Sending offer to Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/213, version=0.13.9): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-8: Sending offer to Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-8: Waiting on 2 outstanding join acks
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 111 for pingd=100 passed
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-8
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 214 : Parsing CIB options
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-8
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-8: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283514-125)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-8
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-8: Still waiting on 1 outstanding offers
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-8: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283514-28)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-8
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-8: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=248
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-8 for 2 clients
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-8: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/216, version=0.13.12): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-8: Still waiting on 2 integrated nodes
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-8 results
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-8: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-8: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-8
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-8: join_ack_nack
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/217, version=0.13.13): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/218, version=0.13.14): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-8: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-8: Updating node state to member for Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-8: Registered callback for LRM update 220
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-8: Updating node state to member for Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-8: Registered callback for LRM update 222
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/219, version=0.13.15): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 220 complete
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-8 complete: join_update_complete_callback
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=225)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 226: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.16 -> 0.13.17 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.17 -> 0.13.18 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.18 -> 0.13.19 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.13.19 -> 0.13.20 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='p_NFS_Server:1_last_0'] (p_NFS_Server:1_last_0 on Cluster-Server-2)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_NFS_Server:1_last_0, magic=0:0;7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.13.20) : Resource op removal
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 227: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/221, version=0.13.20): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.20 -> 0.13.21 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Detected LRM refresh - 9 resources updated: Skipping all resource events
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:276 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.13.21) : LRM Refresh
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="13" num_updates="20" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib num_updates="20" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="13" num_updates="21" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:25:14 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <status >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <node_state id="Cluster-Server-2" uname="Cluster-Server-2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_lrm_query" shutdown="0" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <lrm id="Cluster-Server-2" __crm_diff_marker__="added:top" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <lrm_resources >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_NFS_Server:1" type="nfs-kernel-server" class="lsb" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_NFS_Server:1_last_0" operation_key="p_NFS_Server:1_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;7:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="7" rc-code="0" op-status="0" interval="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_NFS_Server:1_monitor_30000" operation_key="p_NFS_Server:1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;8:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="8" rc-code="0" op-status="0" interval="30000" op-digest="4811cef7f7f94e3a35a70be7916cb2fd" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="LVM_drive" type="LVM2" class="ocf" provider="nas" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="LVM_drive_last_0" operation_key="LVM_drive_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="14:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:7;14:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="10" rc-code="7" op-status="0" interval="0" op-digest="3de128da75b456c2b9e6a8229db6b5e9" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="Lun_iscsi1_last_0" operation_key="Lun_iscsi1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="18:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:7;18:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="17" rc-code="7" op-status="0" interval="0" op-digest="0c856f3d51c2e0ef24c818b04b9cff1e" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="Lun_iscsi2" type="SCSTLun" class="ocf" provider="nas" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="Lun_iscsi2_last_0" operation_key="Lun_iscsi2_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="18:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:7;18:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="15" rc-code="7" op-status="0" interval="0" op-digest="0f4f4ddf3c5e4da19cde1f4511e8fcd5" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_PingD:1" type="ping" class="ocf" provider="pacemaker" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_PingD:1_last_failure_0" operation_key="p_PingD:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;10:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="4" rc-code="0" op-status="0" interval="0" op-digest="e746ac7936e48a80d701184bf3591d18" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_PingD:1_monitor_10000" operation_key="p_PingD:1_monitor_10000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;28:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="6" rc-code="0" op-status="0" interval="10000" op-digest="4cbd9d437c5ab81b1238d21071f3920b" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_Device_drive:1" type="drbd" class="ocf" provider="linbit" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_Device_drive:1_last_failure_0" operation_key="p_Device_drive:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="13:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;13:5:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="9" rc-code="0" op-status="0" interval="0" op-digest="dc5cb13689611f4ed203745ed603621e" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_Device_drive:1_monitor_20000" operation_key="p_Device_drive:1_monitor_20000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="43:6:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;43:6:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="11" rc-code="0" op-status="0" interval="20000" op-digest="5d09870d493985952cc6e27d86f5ff38" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="Target_iscsi1_last_0" operation_key="Target_iscsi1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="17:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:7;17:12:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="16" rc-code="7" op-status="0" interval="0" op-digest="93e8d8d60fb26c23037edf437351d0c4" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="Target_iscsi2" type="SCSTTarget" class="ocf" provider="nas" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="Target_iscsi2_last_0" operation_key="Target_iscsi2_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="17:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:7;17:8:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="14" rc-code="7" op-status="0" interval="0" op-digest="0516a2943aa1d2f676d2f328f6169092" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <lrm_resource id="p_iSCSI_Daemon:1" type="iscsi-scst" class="lsb" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_iSCSI_Daemon:1_last_failure_0" operation_key="p_iSCSI_Daemon:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;9:2:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="3" rc-code="0" op-status="0" interval="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <lrm_rsc_op id="p_iSCSI_Daemon:1_monitor_30000" operation_key="p_iSCSI_Daemon:1_monitor_30000" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.6" transition-key="18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" transition-magic="0:0;18:3:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66" call-id="5" rc-code="0" op-status="0" interval="30000" op-digest="4811cef7f7f94e3a35a70be7916cb2fd" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </lrm_resource>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </lrm_resources>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </lrm>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </node_state>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </status>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 228: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 222 complete
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.21 -> 0.13.22 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/223, version=0.13.22): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.22 -> 0.13.23 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.23 -> 0.13.24 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/225, version=0.13.24): ok (rc=0)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.24 -> 0.13.25 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 114 for pingd=100 passed
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=228, ref=pe_calc-dc-1347283514-129, seq=312, quorate=1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.25 -> 0.13.26 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.26 -> 0.13.27 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi3 on Cluster-Server-1 (Stopped)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi3 on Cluster-Server-1 (Stopped)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi3 on Cluster-Server-2 (Stopped)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi3 on Cluster-Server-2 (Stopped)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.27 -> 0.13.28 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for Target_iscsi3 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for Lun_iscsi3 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 116 for probe_complete=true passed
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 118 for master-p_Device_drive:1=10000 passed
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 120 for probe_complete=true passed
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   Target_iscsi3	(Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   Lun_iscsi3	(Cluster-Server-1)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.28 -> 0.13.29 (S_POLICY_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283514-129" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-14.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="14" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="89" operation="running" operation_key="iSCSI_iscsi3_running_0" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="start" operation_key="Target_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="86" operation="start" operation_key="Lun_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi3_start_0" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi3_start_0" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="85" operation="monitor" operation_key="Target_iscsi3_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="60000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="start" operation_key="Target_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="84" operation="start" operation_key="Target_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="240000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi3_start_0" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="14" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="87" operation="monitor" operation_key="Lun_iscsi3_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="86" operation="start" operation_key="Lun_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="86" operation="start" operation_key="Lun_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="start" operation_key="Target_iscsi3_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="start" operation_key="iSCSI_iscsi3_start_0" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" priority="1000000" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="16" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" priority="1000000" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="13" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="14" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="15" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="12" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="13" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="16" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 14: 13 actions in 13 synapses
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1347283514-129) derived from /var/lib/pengine/pe-input-14.bz2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 88 fired and confirmed
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: monitor Target_iscsi3_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 122 for pingd=100 passed
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Target_iscsi3
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=17:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_monitor_0
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi3
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[18] on Target_iscsi3 for client 40197, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: info: rsc:Target_iscsi3 probe[18] (pid 60017)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 14: monitor Target_iscsi3_monitor_0 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: monitor Lun_iscsi3_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Lun_iscsi3
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=18:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_monitor_0
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi3
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[19] on Lun_iscsi3 for client 40197, its parameters: path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi3] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3]  to the operation list.
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: info: rsc:Lun_iscsi3 probe[19] (pid 60018)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: monitor Lun_iscsi3_monitor_0 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=1, Pending=4, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.29 -> 0.13.30 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.30 -> 0.13.31 (S_TRANSITION_ENGINE)
SCSTTarget(Target_iscsi3)[60017]:	2012/09/10_15:25:14 DEBUG: Target_iscsi3 monitor : 7
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: WARN: Managed Target_iscsi3:monitor process 60017 exited with return code 7.
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: info: operation monitor[18] on Target_iscsi3 for client 40197: pid 60017 exited with return code 7
Sep 10 15:25:14 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 14: PEngine Input stored in: /var/lib/pengine/pe-input-14.bz2
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Target_iscsi3_monitor_0 (call=18, rc=7, cib-update=229, confirmed=true) not running
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi3'
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.31 -> 0.13.32 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_monitor_0 (17) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=2, Pending=3, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
SCSTLun(Lun_iscsi3)[60018]:	2012/09/10_15:25:14 INFO: Lun_iscsi3 monitor : 7
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.32 -> 0.13.33 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_monitor_0 (14) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=3, Pending=2, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
SCSTLun(Lun_iscsi3)[60018]:	2012/09/10_15:25:14 INFO: Lun_iscsi3 monitor : 7
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: WARN: Managed Lun_iscsi3:monitor process 60018 exited with return code 7.
Sep 10 15:25:14 Cluster-Server-2 lrmd: [40194]: info: operation monitor[19] on Lun_iscsi3 for client 40197: pid 60018 exited with return code 7
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Lun_iscsi3_monitor_0 (call=19, rc=7, cib-update=230, confirmed=true) not running
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi3'
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.33 -> 0.13.34 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_monitor_0 (18) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 16: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=4, Pending=1, Fired=1, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=5, Pending=1, Fired=0, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:25:14 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.34 -> 0.13.35 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_monitor_0 (15) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 13: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 12 fired and confirmed
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=6, Pending=0, Fired=2, Skipped=0, Incomplete=5, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 84: start Target_iscsi3_start_0 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=8, Pending=1, Fired=1, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.35 -> 0.13.36 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_start_0 (84) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 85: monitor Target_iscsi3_monitor_10000 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 86: start Lun_iscsi3_start_0 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=9, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.36 -> 0.13.37 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_monitor_10000 (85) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=10, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.37 -> 0.13.38 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_start_0 (86) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 89 fired and confirmed
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 87: monitor Lun_iscsi3_monitor_10000 on Cluster-Server-1
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=11, Pending=1, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 14 (Complete=12, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-14.bz2): In-progress
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.13.38 -> 0.13.39 (S_TRANSITION_ENGINE)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_monitor_10000 (87) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 14 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-14.bz2): Complete
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 14 is now complete
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 14 status: done - <null>
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=270
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:25:14 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 55896)
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 55897)
SCSTTarget(Target_iscsi2)[55896]:	2012/09/10_15:25:15 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 55896 exited with return code 0
SCSTLun(Lun_iscsi2)[55897]:	2012/09/10_15:25:15 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[55897]:	2012/09/10_15:25:15 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 55897 exited with return code 0
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 55910)
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 55911)
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 55910 exited with return code 0
Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:25:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 55911 exited with return code 0
Sep 10 15:25:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 60284)
Sep 10 15:25:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 60285)
Sep 10 15:25:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 60284 exited with return code 0
Sep 10 15:25:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:25:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 60285 exited with return code 0
Sep 10 15:25:18 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 55921)
drbd(p_Device_drive:0)[55921]:	2012/09/10_15:25:18 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:18 Cluster-Server-1 crm_attribute: [55951]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:18 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:25:18 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[55921]:	2012/09/10_15:25:18 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[55921]:	2012/09/10_15:25:18 DEBUG: drive: Command output: 
Sep 10 15:25:18 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:25:18 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 55921 exited with return code 8
Sep 10 15:25:22 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 56117)
Sep 10 15:25:22 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 60752)
Sep 10 15:25:24 Cluster-Server-1 attrd_updater: [56137]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:24 Cluster-Server-1 attrd_updater: [56137]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:24 Cluster-Server-1 attrd_updater: [56137]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:24 Cluster-Server-1 attrd_updater: [56137]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:24 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:24 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 56117 exited with return code 0
Sep 10 15:25:24 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 56138)
SCSTTarget(Target_iscsi3)[56138]:	2012/09/10_15:25:24 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:25:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 56138 exited with return code 0
Sep 10 15:25:24 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 56144)
SCSTLun(Lun_iscsi3)[56144]:	2012/09/10_15:25:24 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[56144]:	2012/09/10_15:25:24 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:25:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 56144 exited with return code 0
Sep 10 15:25:24 Cluster-Server-2 attrd_updater: [60803]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:24 Cluster-Server-2 attrd_updater: [60803]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:24 Cluster-Server-2 attrd_updater: [60803]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:24 Cluster-Server-2 attrd_updater: [60803]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:24 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 60752 exited with return code 0
Sep 10 15:25:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 56152)
Sep 10 15:25:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 56153)
SCSTTarget(Target_iscsi2)[56152]:	2012/09/10_15:25:25 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:25:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 56152 exited with return code 0
SCSTLun(Lun_iscsi2)[56153]:	2012/09/10_15:25:25 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[56153]:	2012/09/10_15:25:25 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:25:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 56153 exited with return code 0
Sep 10 15:25:27 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 61086)
drbd(p_Device_drive:1)[61086]:	2012/09/10_15:25:27 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:27 Cluster-Server-2 crm_attribute: [61116]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:25:27 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[61086]:	2012/09/10_15:25:28 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[61086]:	2012/09/10_15:25:28 DEBUG: drive: Command output: 
Sep 10 15:25:28 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 56166)
drbd(p_Device_drive:0)[56166]:	2012/09/10_15:25:28 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:28 Cluster-Server-1 crm_attribute: [56196]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:28 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:25:28 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[56166]:	2012/09/10_15:25:28 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[56166]:	2012/09/10_15:25:28 DEBUG: drive: Command output: 
Sep 10 15:25:28 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:25:28 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 56166 exited with return code 8
Sep 10 15:25:28 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:25:28 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 61086 exited with return code 0
Sep 10 15:25:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 56203)
Sep 10 15:25:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 56219)
SCSTTarget(Target_iscsi3)[56219]:	2012/09/10_15:25:34 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:25:34 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 56219 exited with return code 0
Sep 10 15:25:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 56225)
SCSTLun(Lun_iscsi3)[56225]:	2012/09/10_15:25:34 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[56225]:	2012/09/10_15:25:34 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:25:34 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 56225 exited with return code 0
Sep 10 15:25:34 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 61542)
Sep 10 15:25:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 56233)
Sep 10 15:25:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 56234)
SCSTTarget(Target_iscsi2)[56233]:	2012/09/10_15:25:35 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:25:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 56233 exited with return code 0
SCSTLun(Lun_iscsi2)[56234]:	2012/09/10_15:25:35 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[56234]:	2012/09/10_15:25:35 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:25:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 56234 exited with return code 0
Sep 10 15:25:36 Cluster-Server-1 attrd_updater: [56249]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:36 Cluster-Server-1 attrd_updater: [56249]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:36 Cluster-Server-1 attrd_updater: [56249]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:36 Cluster-Server-1 attrd_updater: [56249]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:36 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:36 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:36 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 56203 exited with return code 0
Sep 10 15:25:36 Cluster-Server-2 attrd_updater: [61807]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:36 Cluster-Server-2 attrd_updater: [61807]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:36 Cluster-Server-2 attrd_updater: [61807]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:36 Cluster-Server-2 attrd_updater: [61807]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:36 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:36 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:36 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 61542 exited with return code 0
Sep 10 15:25:38 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 56250)
drbd(p_Device_drive:0)[56250]:	2012/09/10_15:25:38 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:38 Cluster-Server-1 crm_attribute: [56280]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:38 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:25:38 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[56250]:	2012/09/10_15:25:38 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[56250]:	2012/09/10_15:25:38 DEBUG: drive: Command output: 
Sep 10 15:25:38 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:25:38 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 56250 exited with return code 8
Sep 10 15:25:41 Cluster-Server-1 cib: [48709]: info: crm_signal_dispatch: Invoking handler for signal 13: Broken pipe
Sep 10 15:25:41 Cluster-Server-1 cib: [48709]: info: cib_enable_writes: (Re)enabling disk writes
Sep 10 15:25:44 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 56289)
SCSTTarget(Target_iscsi3)[56289]:	2012/09/10_15:25:44 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:25:44 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 56289 exited with return code 0
Sep 10 15:25:44 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 56295)
SCSTLun(Lun_iscsi3)[56295]:	2012/09/10_15:25:44 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[56295]:	2012/09/10_15:25:44 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:25:44 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 56295 exited with return code 0
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 56303)
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 56304)
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 56305)
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 56306)
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 56305 exited with return code 0
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

SCSTTarget(Target_iscsi2)[56303]:	2012/09/10_15:25:45 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 56303 exited with return code 0
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 56306 exited with return code 0
SCSTLun(Lun_iscsi2)[56304]:	2012/09/10_15:25:45 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[56304]:	2012/09/10_15:25:45 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:25:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 56304 exited with return code 0
Sep 10 15:25:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 62549)
Sep 10 15:25:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 62550)
Sep 10 15:25:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 62549 exited with return code 0
Sep 10 15:25:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:25:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 62550 exited with return code 0
Sep 10 15:25:46 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 56323)
Sep 10 15:25:46 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 62555)
Sep 10 15:25:46 Cluster-Server-2 cib: [40192]: info: crm_signal_dispatch: Invoking handler for signal 13: Broken pipe
Sep 10 15:25:46 Cluster-Server-2 cib: [40192]: info: cib_enable_writes: (Re)enabling disk writes
Sep 10 15:25:48 Cluster-Server-1 attrd_updater: [56341]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:48 Cluster-Server-1 attrd_updater: [56341]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:48 Cluster-Server-1 attrd_updater: [56341]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:48 Cluster-Server-1 attrd_updater: [56341]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 56323 exited with return code 0
Sep 10 15:25:48 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 56392)
drbd(p_Device_drive:0)[56392]:	2012/09/10_15:25:48 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:48 Cluster-Server-1 crm_attribute: [56452]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:25:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[56392]:	2012/09/10_15:25:48 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[56392]:	2012/09/10_15:25:48 DEBUG: drive: Command output: 
Sep 10 15:25:48 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:25:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 56392 exited with return code 8
Sep 10 15:25:48 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 62711)
drbd(p_Device_drive:1)[62711]:	2012/09/10_15:25:48 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:48 Cluster-Server-2 crm_attribute: [62741]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:25:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[62711]:	2012/09/10_15:25:48 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[62711]:	2012/09/10_15:25:48 DEBUG: drive: Command output: 
Sep 10 15:25:48 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:25:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 62711 exited with return code 0
Sep 10 15:25:48 Cluster-Server-2 attrd_updater: [62750]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:48 Cluster-Server-2 attrd_updater: [62750]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:48 Cluster-Server-2 attrd_updater: [62750]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:25:48 Cluster-Server-2 attrd_updater: [62750]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:25:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:25:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:25:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 62555 exited with return code 0
Sep 10 15:25:54 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 56615)
SCSTTarget(Target_iscsi3)[56615]:	2012/09/10_15:25:54 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:25:54 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 56615 exited with return code 0
Sep 10 15:25:54 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 56621)
SCSTLun(Lun_iscsi3)[56621]:	2012/09/10_15:25:54 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[56621]:	2012/09/10_15:25:54 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:25:54 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 56621 exited with return code 0
Sep 10 15:25:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 56629)
Sep 10 15:25:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 56630)
SCSTTarget(Target_iscsi2)[56629]:	2012/09/10_15:25:55 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:25:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 56629 exited with return code 0
SCSTLun(Lun_iscsi2)[56630]:	2012/09/10_15:25:55 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[56630]:	2012/09/10_15:25:55 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:25:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 56630 exited with return code 0
Sep 10 15:25:58 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 56653)
Sep 10 15:25:58 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 56669)
drbd(p_Device_drive:0)[56669]:	2012/09/10_15:25:58 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:25:58 Cluster-Server-1 crm_attribute: [56699]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:25:58 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:25:58 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[56669]:	2012/09/10_15:25:58 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[56669]:	2012/09/10_15:25:58 DEBUG: drive: Command output: 
Sep 10 15:25:58 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:25:58 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 56669 exited with return code 8
Sep 10 15:25:58 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 63533)
Sep 10 15:26:00 Cluster-Server-1 attrd_updater: [56708]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:00 Cluster-Server-1 attrd_updater: [56708]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:00 Cluster-Server-1 attrd_updater: [56708]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:00 Cluster-Server-1 attrd_updater: [56708]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:00 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:00 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:00 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 56653 exited with return code 0
Sep 10 15:26:00 Cluster-Server-2 attrd_updater: [63641]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:00 Cluster-Server-2 attrd_updater: [63641]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:00 Cluster-Server-2 attrd_updater: [63641]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:00 Cluster-Server-2 attrd_updater: [63641]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:00 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:00 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:00 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 63533 exited with return code 0
Sep 10 15:26:04 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 56712)
SCSTTarget(Target_iscsi3)[56712]:	2012/09/10_15:26:04 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:26:04 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 56712 exited with return code 0
Sep 10 15:26:04 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 56718)
SCSTLun(Lun_iscsi3)[56718]:	2012/09/10_15:26:04 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[56718]:	2012/09/10_15:26:04 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:26:04 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 56718 exited with return code 0
Sep 10 15:26:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 56726)
Sep 10 15:26:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 56727)
SCSTTarget(Target_iscsi2)[56726]:	2012/09/10_15:26:05 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:26:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 56726 exited with return code 0
SCSTLun(Lun_iscsi2)[56727]:	2012/09/10_15:26:05 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[56727]:	2012/09/10_15:26:05 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 56727 exited with return code 0
Sep 10 15:26:08 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 56797)
drbd(p_Device_drive:0)[56797]:	2012/09/10_15:26:09 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:08 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 64308)
drbd(p_Device_drive:1)[64308]:	2012/09/10_15:26:08 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:08 Cluster-Server-2 crm_attribute: [64338]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:08 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:26:08 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[64308]:	2012/09/10_15:26:08 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[64308]:	2012/09/10_15:26:08 DEBUG: drive: Command output: 
Sep 10 15:26:08 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:26:08 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 64308 exited with return code 0
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:09 Cluster-Server-1 crm_attribute: [56827]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:09 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:26:09 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[56797]:	2012/09/10_15:26:09 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[56797]:	2012/09/10_15:26:09 DEBUG: drive: Command output: 
Sep 10 15:26:09 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:26:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 56797 exited with return code 8
Sep 10 15:26:10 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 56834)
Sep 10 15:26:10 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 64378)
Sep 10 15:26:12 Cluster-Server-1 attrd_updater: [56995]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:12 Cluster-Server-1 attrd_updater: [56995]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:12 Cluster-Server-1 attrd_updater: [56995]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:12 Cluster-Server-1 attrd_updater: [56995]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:12 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 56834 exited with return code 0
Sep 10 15:26:12 Cluster-Server-2 attrd_updater: [64678]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:12 Cluster-Server-2 attrd_updater: [64678]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:12 Cluster-Server-2 attrd_updater: [64678]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:12 Cluster-Server-2 attrd_updater: [64678]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:12 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 64378 exited with return code 0
Sep 10 15:26:14 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 57084)
SCSTTarget(Target_iscsi3)[57084]:	2012/09/10_15:26:14 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:26:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 57084 exited with return code 0
Sep 10 15:26:14 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 57090)
SCSTLun(Lun_iscsi3)[57090]:	2012/09/10_15:26:14 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[57090]:	2012/09/10_15:26:14 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:26:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 57090 exited with return code 0
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 57103)
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 57104)
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 57103 exited with return code 0
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 57104 exited with return code 0
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 57109)
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 57110)
SCSTTarget(Target_iscsi2)[57109]:	2012/09/10_15:26:15 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 57109 exited with return code 0
SCSTLun(Lun_iscsi2)[57110]:	2012/09/10_15:26:15 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[57110]:	2012/09/10_15:26:15 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 57110 exited with return code 0
Sep 10 15:26:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 65143)
Sep 10 15:26:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 65144)
Sep 10 15:26:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 65143 exited with return code 0
Sep 10 15:26:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:26:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 65144 exited with return code 0
Sep 10 15:26:19 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 57217)
drbd(p_Device_drive:0)[57217]:	2012/09/10_15:26:19 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:19 Cluster-Server-1 crm_attribute: [57247]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:19 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:26:19 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[57217]:	2012/09/10_15:26:19 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[57217]:	2012/09/10_15:26:19 DEBUG: drive: Command output: 
Sep 10 15:26:19 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:26:19 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 57217 exited with return code 8
Sep 10 15:26:22 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 57297)
Sep 10 15:26:22 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 304)
Sep 10 15:26:24 Cluster-Server-1 attrd_updater: [57336]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:24 Cluster-Server-1 attrd_updater: [57336]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:24 Cluster-Server-1 attrd_updater: [57336]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:24 Cluster-Server-1 attrd_updater: [57336]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:24 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:24 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 57297 exited with return code 0
Sep 10 15:26:24 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 57348)
SCSTTarget(Target_iscsi3)[57348]:	2012/09/10_15:26:24 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:26:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 57348 exited with return code 0
Sep 10 15:26:24 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 57354)
SCSTLun(Lun_iscsi3)[57354]:	2012/09/10_15:26:24 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[57354]:	2012/09/10_15:26:24 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:26:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 57354 exited with return code 0
Sep 10 15:26:24 Cluster-Server-2 attrd_updater: [490]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:24 Cluster-Server-2 attrd_updater: [490]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:24 Cluster-Server-2 attrd_updater: [490]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:24 Cluster-Server-2 attrd_updater: [490]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:24 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 304 exited with return code 0
Sep 10 15:26:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 57383)
Sep 10 15:26:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 57384)
SCSTTarget(Target_iscsi2)[57383]:	2012/09/10_15:26:25 DEBUG: Target_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[57384]:	2012/09/10_15:26:25 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 57383 exited with return code 0
SCSTLun(Lun_iscsi2)[57384]:	2012/09/10_15:26:25 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 57384 exited with return code 0
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57459] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57459] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57459] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57461] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57461] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57461] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57463] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57463] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57463] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57465] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57465] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57465] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57474] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57474] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57474] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57483] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57483] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57483] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57490] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57490] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57490] is unregistered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57497] registered
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57497] disconnected.
Sep 10 15:26:27 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57497] is unregistered
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57504] registered
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57504] disconnected.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57504] is unregistered
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [57512] registered
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:57512] disconnected.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:57512] is unregistered
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 80000us
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 80000 vs 390000 (usec)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 11 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=68
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-9
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 57523 exited with return code 0.
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-9
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 125 for pingd=100 passed
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-9
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-9: join_ack_nack
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource LVM_drive after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi2 after complete start op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 127 for probe_complete=true passed
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi3 after complete start op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi3 after complete start op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-9: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource FS_nfs1
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=16:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_monitor_0
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs1
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[34] on FS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] directory=[/volumes/nfs1] force_clones=[false] fstype=[xfs] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs1 probe[34] (pid 57536)
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource ExportFS_nfs1
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=17:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_monitor_0
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs1
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[35] on ExportFS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100] directory=[/volumes/nfs1] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] clientspec=[*] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs1 probe[35] (pid 57538)
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 57539 exited with return code 0.
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:26:28 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 129 for master-p_Device_drive:0=10000 passed
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 131 for probe_complete=true passed
exportfs(ExportFS_nfs1)[57538]:	2012/09/10_15:26:28 INFO: Directory /volumes/nfs1 is not exported to * (stopped).
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: WARN: Managed ExportFS_nfs1:monitor process 57538 exited with return code 7.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: info: operation monitor[35] on ExportFS_nfs1 for client 48715: pid 57538 exited with return code 7
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs1_monitor_0 (call=35, rc=7, cib-update=74, confirmed=true) not running
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs1'
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 134 for pingd=100 passed
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 136 for probe_complete=true passed
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 138 for pingd=100 passed
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: WARN: Managed FS_nfs1:monitor process 57536 exited with return code 7.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: info: operation monitor[34] on FS_nfs1 for client 48715: pid 57536 exited with return code 7
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs1_monitor_0 (call=34, rc=7, cib-update=75, confirmed=true) not running
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs1'
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:26:28 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:26:28 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=90:16:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_start_0
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs1
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[36] on FS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] directory=[/volumes/nfs1] CRM_meta_name=[start] force_clones=[false] CRM_meta_timeout=[60000] fstype=[xfs]  to the operation list.
Sep 10 15:26:28 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs1 start[36] (pid 57591)
Filesystem(FS_nfs1)[57591]:	2012/09/10_15:26:28 INFO: Running start for /dev/drive-CSD/nfs1_NFS on /volumes/nfs1
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.13.39 -> 0.14.1 (S_IDLE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.14.1) : Non-status change
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="13" num_updates="39" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="13" num_updates="39" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="14" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:25:14 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="NFS_nfs1" __crm_diff_marker__="added:top" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="FS_nfs1" provider="nas" type="Filesystem" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="FS_nfs1-instance_attributes" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-device" name="device" value="/dev/drive-CSD/nfs1_NFS" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.13.39 -> 0.14.1 from Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-fstype" name="fstype" value="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-force_clones" name="force_clones" value="false" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs1-stop-0" interval="0" name="stop" timeout="60" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs1-monitor-20" interval="20" name="monitor" timeout="40" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="FS_nfs1-meta_attributes" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="ExportFS_nfs1" provider="nas" type="exportfs" >
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="ExportFS_nfs1-instance_attributes" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-fsid" name="fsid" value="1955f364-fb4b-11e1-b02e-000c290247c7" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-options" name="options" value="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-clientspec" name="clientspec" value="*" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="13" num_updates="39" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs1-start-0" interval="0" name="start" timeout="40" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs1-stop-0" interval="0" name="stop" timeout="10" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="14" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:25:14 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs1-monitor-10" interval="10" name="monitor" timeout="20" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="ExportFS_nfs1-meta_attributes" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="NFS_nfs1" __crm_diff_marker__="added:top" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="FS_nfs1" provider="nas" type="Filesystem" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="FS_nfs1-instance_attributes" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="NFS_nfs1_after_LVM_drive" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="NFS_Server" id="NFS_nfs1_after_NFS_Server" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs1-instance_attributes-device" name="device" value="/dev/drive-CSD/nfs1_NFS" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="NFS_nfs1_with_LVM_drive" rsc="NFS_nfs1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="NFS_nfs1_with_NFS_Server" rsc="NFS_nfs1" score="INFINITY" with-rsc="NFS_Server" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs1-instance_attributes-fstype" name="fstype" value="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs1-instance_attributes-force_clones" name="force_clones" value="false" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="FS_nfs1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="FS_nfs1-stop-0" interval="0" name="stop" timeout="60" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="FS_nfs1-monitor-20" interval="20" name="monitor" timeout="40" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="FS_nfs1-meta_attributes" >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="ExportFS_nfs1" provider="nas" type="exportfs" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="ExportFS_nfs1-instance_attributes" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs1-instance_attributes-fsid" name="fsid" value="1955f364-fb4b-11e1-b02e-000c290247c7" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 233: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs1-instance_attributes-options" name="options" value="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs1-instance_attributes-clientspec" name="clientspec" value="*" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="ExportFS_nfs1-start-0" interval="0" name="start" timeout="40" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 390000us
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 11
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="ExportFS_nfs1-stop-0" interval="0" name="stop" timeout="10" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=274
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="ExportFS_nfs1-monitor-10" interval="10" name="monitor" timeout="20" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 390000us
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 11 (current: 11, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="ExportFS_nfs1-meta_attributes" >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs1-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <constraints >
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="LVM_drive" id="NFS_nfs1_after_LVM_drive" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="NFS_Server" id="NFS_nfs1_after_NFS_Server" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="NFS_nfs1_with_LVM_drive" rsc="NFS_nfs1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="NFS_nfs1_with_NFS_Server" rsc="NFS_nfs1" score="INFINITY" with-rsc="NFS_Server" __crm_diff_marker__="added:top" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 390000us
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </constraints>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 390000 vs 0  (usec)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 11 (current: 11, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.14.1): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=276
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/231, version=0.14.2): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/234, version=0.14.4): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/235, version=0.14.5): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 124 for master-p_Device_drive:1=10000 passed
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 126 for probe_complete=true passed
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 128 for pingd=100 passed
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/237, version=0.14.9): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-9: Initializing join data (flag=true)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-9: Sending offer to Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-9: Sending offer to Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-9: Waiting on 2 outstanding join acks
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-9
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/239, version=0.14.10): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 240 : Parsing CIB options
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-9
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-9: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283588-143)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-9
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-9: Still waiting on 1 outstanding offers
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 837)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-9: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283588-31)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-9
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-9: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=280
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finalizing join-9 for 2 clients
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-9: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/242, version=0.14.12): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-9: Still waiting on 2 integrated nodes
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-9 results
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-9: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-9: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-9
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-9: join_ack_nack
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource LVM_drive after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/243, version=0.14.13): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-9: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-9: Updating node state to member for Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/244, version=0.14.14): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-9: Registered callback for LRM update 246
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/245, version=0.14.15): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 246 complete
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-9: Still waiting on 1 finalized nodes
drbd(p_Device_drive:1)[837]:	2012/09/10_15:26:28 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-9: Updating node state to member for Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-9: Registered callback for LRM update 248
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 836 exited with return code 0.
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/247, version=0.14.17): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 248 complete
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-9 complete: join_update_complete_callback
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=251)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 252: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/249, version=0.14.19): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.18 -> 0.14.19 (S_POLICY_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.19 -> 0.14.20 (S_POLICY_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.20 -> 0.14.21 (S_POLICY_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/251, version=0.14.21): ok (rc=0)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=252, ref=pe_calc-dc-1347283588-147, seq=312, quorate=1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 130 for master-p_Device_drive:1=10000 passed
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 132 for probe_complete=true passed
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.21 -> 0.14.22 (S_POLICY_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.22 -> 0.14.23 (S_POLICY_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Stopped 
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 134 for pingd=100 passed
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.23 -> 0.14.24 (S_POLICY_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing FS_nfs1 on Cluster-Server-1 (Stopped)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing ExportFS_nfs1 on Cluster-Server-1 (Stopped)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing FS_nfs1 on Cluster-Server-2 (Stopped)
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing ExportFS_nfs1 on Cluster-Server-2 (Stopped)
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:28 Cluster-Server-2 crm_attribute: [868]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for FS_nfs1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   FS_nfs1	(Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   ExportFS_nfs1	(Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283588-147" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-15.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="15" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="99" operation="running" operation_key="NFS_nfs1_running_0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="94" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="96" operation="start" operation_key="ExportFS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="98" operation="start" operation_key="NFS_nfs1_start_0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="98" operation="start" operation_key="NFS_nfs1_start_0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="95" operation="monitor" operation_key="FS_nfs1_monitor_20000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="20000" CRM_meta_name="monitor" CRM_meta_timeout="40000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="94" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="94" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="14" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="98" operation="start" operation_key="NFS_nfs1_start_0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="19" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="16" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="97" operation="monitor" operation_key="ExportFS_nfs1_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="96" operation="start" operation_key="ExportFS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="96" operation="start" operation_key="ExportFS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="40000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="14" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="94" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="98" operation="start" operation_key="NFS_nfs1_start_0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="20" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" priority="1000000" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="19" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="20" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" priority="1000000" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="16" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="14" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="15" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 15: 13 actions in 13 synapses
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 15 (ref=pe_calc-dc-1347283588-147) derived from /var/lib/pengine/pe-input-15.bz2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 98 fired and confirmed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 19: monitor FS_nfs1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource FS_nfs1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=19:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_monitor_0
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs1
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[20] on FS_nfs1 for client 40197, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] directory=[/volumes/nfs1] force_clones=[false] fstype=[xfs] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: rsc:FS_nfs1 probe[20] (pid 873)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 16: monitor FS_nfs1_monitor_0 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 20: monitor ExportFS_nfs1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource ExportFS_nfs1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=20:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_monitor_0
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs1
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[21] on ExportFS_nfs1 for client 40197, its parameters: crm_feature_set=[3.0.6] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1directory=[/volumes/nfs1] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] clientspec=[*] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: rsc:ExportFS_nfs1 probe[21] (pid 874)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: WARN: Managed FS_nfs1:monitor process 873 exited with return code 5.
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: operation monitor[20] on FS_nfs1 for client 40197: pid 873 exited with return code 5
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: monitor ExportFS_nfs1_monitor_0 on Cluster-Server-1
drbd(p_Device_drive:1)[837]:	2012/09/10_15:26:28 DEBUG: drive: Exit code 0
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 15 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-15.bz2): In-progress
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resource FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: WARN: Managed ExportFS_nfs1:monitor process 874 exited with return code 5.
drbd(p_Device_drive:1)[837]:	2012/09/10_15:26:28 DEBUG: drive: Command output: 
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: operation monitor[21] on ExportFS_nfs1 for client 40197: pid 874 exited with return code 5
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 15: PEngine Input stored in: /var/lib/pengine/pe-input-15.bz2
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:26:28 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 137 for pingd=100 passed
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 139 for probe_complete=true passed
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: ERROR: get_resource_meta: pclose failed: Interrupted system call
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: WARN: on_msg_get_metadata: empty metadata for ocf::nas::Filesystem.
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: WARN: G_SIG_dispatch: Dispatch function for SIGCHLD was delayed 200 ms (> 100 ms) before being called (GSource: 0x1210ef0)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: G_SIG_dispatch: started at 4297231418 should have started at 4297231398
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: ERROR: lrm_get_rsc_type_metadata(578): got a return code HA_FAIL from a reply message of rmetadata with function get_ret_from_msg.
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 837 exited with return code 0
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: WARN: get_rsc_metadata: No metadata found for Filesystem::ocf:nas
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: ERROR: string2xml: Can't parse NULL input
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: ERROR: get_rsc_restart_list: Metadata for nas::ocf:Filesystem is not valid XML
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation FS_nfs1_monitor_0 (call=20, rc=5, cib-update=253, confirmed=true) not installed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs1'
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resource ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: ERROR: get_resource_meta: pclose failed: Resource temporarily unavailable
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: WARN: on_msg_get_metadata: empty metadata for ocf::nas::exportfs.
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: WARN: G_SIG_dispatch: Dispatch function for SIGCHLD was delayed 200 ms (> 100 ms) before being called (GSource: 0x1210ef0)
Sep 10 15:26:28 Cluster-Server-2 lrmd: [40194]: info: G_SIG_dispatch: started at 4297231438 should have started at 4297231418
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: ERROR: lrm_get_rsc_type_metadata(578): got a return code HA_FAIL from a reply message of rmetadata with function get_ret_from_msg.
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: WARN: get_rsc_metadata: No metadata found for exportfs::ocf:nas
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: ERROR: string2xml: Can't parse NULL input
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: ERROR: get_rsc_restart_list: Metadata for nas::ocf:exportfs is not valid XML
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation ExportFS_nfs1_monitor_0 (call=21, rc=5, cib-update=254, confirmed=true) not installed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs1'
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 15 (Complete=1, Pending=4, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-15.bz2): In-progress
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.24 -> 0.14.25 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.25 -> 0.14.26 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.26 -> 0.14.27 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.27 -> 0.14.28 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.28 -> 0.14.29 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.29 -> 0.14.30 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.30 -> 0.14.31 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.31 -> 0.14.32 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_monitor_0 (17) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.32 -> 0.14.33 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_monitor_0 (16) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.33 -> 0.14.34 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 19 (FS_nfs1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 5): Error
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FS_nfs1_last_failure_0, magic=0:5;19:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.14.34) : Event failed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort priority upgraded from 0 to 1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort action done superseded by restart
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_monitor_0 (19) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.34 -> 0.14.35 (S_TRANSITION_ENGINE)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 20 (ExportFS_nfs1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 5): Error
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ExportFS_nfs1_last_failure_0, magic=0:5;20:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.14.35) : Event failed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_monitor_0 (20) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:26:28 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 15 (Complete=5, Pending=0, Fired=2, Skipped=6, Incomplete=0, Source=/var/lib/pengine/pe-input-15.bz2): In-progress
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 15 (Complete=7, Pending=0, Fired=0, Skipped=6, Incomplete=0, Source=/var/lib/pengine/pe-input-15.bz2): Stopped
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 15 is now complete
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 15 status: restart - Event failed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 255: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=255, ref=pe_calc-dc-1347283588-154, seq=312, quorate=1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Stopped 
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for FS_nfs1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   FS_nfs1	(Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   ExportFS_nfs1	(Cluster-Server-1)
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283588-154" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-16.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="16" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="95" operation="running" operation_key="NFS_nfs1_running_0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="90" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="92" operation="start" operation_key="ExportFS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="94" operation="start" operation_key="NFS_nfs1_start_0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="94" operation="start" operation_key="NFS_nfs1_start_0" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="91" operation="monitor" operation_key="FS_nfs1_monitor_20000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="20000" CRM_meta_name="monitor" CRM_meta_timeout="40000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="90" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="90" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="94" operation="start" operation_key="NFS_nfs1_start_0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="93" operation="monitor" operation_key="ExportFS_nfs1_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="92" operation="start" operation_key="ExportFS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="92" operation="start" operation_key="ExportFS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="40000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="90" operation="start" operation_key="FS_nfs1_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="94" operation="start" operation_key="NFS_nfs1_start_0" />
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 16: 6 actions in 6 synapses
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 16 (ref=pe_calc-dc-1347283588-154) derived from /var/lib/pengine/pe-input-16.bz2
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 94 fired and confirmed
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 90: start FS_nfs1_start_0 on Cluster-Server-1
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 16 (Complete=0, Pending=1, Fired=2, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-16.bz2): In-progress
Sep 10 15:26:28 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 16 (Complete=1, Pending=1, Fired=0, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-16.bz2): In-progress
Sep 10 15:26:28 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 16: PEngine Input stored in: /var/lib/pengine/pe-input-16.bz2
Sep 10 15:26:29 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 57648)
drbd(p_Device_drive:0)[57648]:	2012/09/10_15:26:29 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:29 Cluster-Server-1 crm_attribute: [57678]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:29 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:26:29 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[57648]:	2012/09/10_15:26:29 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[57648]:	2012/09/10_15:26:29 DEBUG: drive: Command output: 
Sep 10 15:26:29 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:26:29 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 57648 exited with return code 8
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: Managed FS_nfs1:start process 57591 exited with return code 0.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: operation start[36] on FS_nfs1 for client 48715: pid 57591 exited with return code 0
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs1 after complete start op (interval=0)
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs1_start_0 (call=36, rc=0, cib-update=76, confirmed=true) ok
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'FS_nfs1'
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=91:16:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_monitor_20000
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[37] on FS_nfs1 for client 48715, its parameters: fstype=[xfs] CRM_meta_timeout=[40000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] force_clones=[false] CRM_meta_interval=[20000] directory=[/volumes/nfs1]  to the operation list.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs1 monitor[37] (pid 57706)
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=92:16:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_start_0
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs1
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[38] on ExportFS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] directory=[/volumes/nfs1] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] CRM_meta_name=[start] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1clientspec=[*] CRM_meta_timeout=[40000]  to the operation list.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs1 start[38] (pid 57707)
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: Directory /volumes/nfs1 is not exported to * (stopped).
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: Directory /volumes/nfs1 is not exported to * (stopped).
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: Exporting file system ...
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: Exporting file system ...
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: Managed FS_nfs1:monitor process 57706 exited with return code 0.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: operation monitor[37] on FS_nfs1 for client 48715: pid 57706 exited with return code 0
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs1 after complete monitor op (interval=20000)
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs1_monitor_20000 (call=37, rc=0, cib-update=77, confirmed=false) ok
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs1'
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: exporting *:/volumes/nfs1
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: exporting *:/volumes/nfs1
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 WARNING: rmtab backup /volumes/nfs1/.rmtab not found or not readable.
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 WARNING: rmtab backup /volumes/nfs1/.rmtab not found or not readable.
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: File system exported
exportfs(ExportFS_nfs1)[57707]:	2012/09/10_15:26:30 INFO: File system exported
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: Managed ExportFS_nfs1:start process 57707 exited with return code 0.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: operation start[38] on ExportFS_nfs1 for client 48715: pid 57707 exited with return code 0
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs1 after complete start op (interval=0)
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs1_start_0 (call=38, rc=0, cib-update=78, confirmed=true) ok
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'ExportFS_nfs1'
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=93:16:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_monitor_10000
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[39] on ExportFS_nfs1 for client 48715, its parameters: options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1CRM_meta_timeout=[20000] CRM_meta_name=[monitor] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] clientspec=[*] directory=[/volumes/nfs1]  to the operation list.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs1 monitor[39] (pid 57773)
exportfs(ExportFS_nfs1)[57773]:	2012/09/10_15:26:30 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[57773]:	2012/09/10_15:26:30 INFO: Directory /volumes/nfs1 is exported to * (started).
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: Managed ExportFS_nfs1:monitor process 57773 exited with return code 0.
Sep 10 15:26:30 Cluster-Server-1 lrmd: [48712]: info: operation monitor[39] on ExportFS_nfs1 for client 48715: pid 57773 exited with return code 0
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs1 after complete monitor op (interval=10000)
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs1_monitor_10000 (call=39, rc=0, cib-update=79, confirmed=false) ok
Sep 10 15:26:30 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs1'
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.35 -> 0.14.36 (S_TRANSITION_ENGINE)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_start_0 (90) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 91: monitor FS_nfs1_monitor_20000 on Cluster-Server-1
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 92: start ExportFS_nfs1_start_0 on Cluster-Server-1
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 16 (Complete=2, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-16.bz2): In-progress
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.36 -> 0.14.37 (S_TRANSITION_ENGINE)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_monitor_20000 (91) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 16 (Complete=3, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-16.bz2): In-progress
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.37 -> 0.14.38 (S_TRANSITION_ENGINE)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_start_0 (92) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 95 fired and confirmed
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 93: monitor ExportFS_nfs1_monitor_10000 on Cluster-Server-1
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 16 (Complete=4, Pending=1, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-16.bz2): In-progress
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 16 (Complete=5, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-16.bz2): In-progress
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.14.38 -> 0.14.39 (S_TRANSITION_ENGINE)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_monitor_10000 (93) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 16 (Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-16.bz2): Complete
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 16 is now complete
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 16 status: done - <null>
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=301
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:26:30 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:26:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 57801)
Sep 10 15:26:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 57817)
SCSTTarget(Target_iscsi3)[57817]:	2012/09/10_15:26:34 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:26:34 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 57817 exited with return code 0
Sep 10 15:26:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 57823)
SCSTLun(Lun_iscsi3)[57823]:	2012/09/10_15:26:34 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[57823]:	2012/09/10_15:26:34 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:26:34 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 57823 exited with return code 0
Sep 10 15:26:34 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 1409)
Sep 10 15:26:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 57831)
Sep 10 15:26:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 57832)
SCSTTarget(Target_iscsi2)[57831]:	2012/09/10_15:26:35 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:26:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 57831 exited with return code 0
SCSTLun(Lun_iscsi2)[57832]:	2012/09/10_15:26:35 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[57832]:	2012/09/10_15:26:35 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 57832 exited with return code 0
Sep 10 15:26:36 Cluster-Server-1 attrd_updater: [57849]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:36 Cluster-Server-1 attrd_updater: [57849]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:36 Cluster-Server-1 attrd_updater: [57849]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:36 Cluster-Server-1 attrd_updater: [57849]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:36 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:36 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:36 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 57801 exited with return code 0
Sep 10 15:26:36 Cluster-Server-2 attrd_updater: [1678]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:36 Cluster-Server-2 attrd_updater: [1678]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:36 Cluster-Server-2 attrd_updater: [1678]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:36 Cluster-Server-2 attrd_updater: [1678]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:36 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:36 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:36 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 1409 exited with return code 0
Sep 10 15:26:39 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 58046)
drbd(p_Device_drive:0)[58046]:	2012/09/10_15:26:39 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:39 Cluster-Server-1 crm_attribute: [58077]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:39 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:26:39 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[58046]:	2012/09/10_15:26:39 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[58046]:	2012/09/10_15:26:39 DEBUG: drive: Command output: 
Sep 10 15:26:39 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:26:39 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 58046 exited with return code 8
Sep 10 15:26:40 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs1 monitor[39] (pid 58350)
exportfs(ExportFS_nfs1)[58350]:	2012/09/10_15:26:41 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[58350]:	2012/09/10_15:26:41 INFO: Directory /volumes/nfs1 is exported to * (started).
Sep 10 15:26:41 Cluster-Server-1 lrmd: [48712]: info: operation monitor[39] on ExportFS_nfs1 for client 48715: pid 58350 exited with return code 0
Sep 10 15:26:44 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 58368)
Sep 10 15:26:44 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 58371)
SCSTTarget(Target_iscsi3)[58368]:	2012/09/10_15:26:44 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:26:44 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 58368 exited with return code 0
SCSTLun(Lun_iscsi3)[58371]:	2012/09/10_15:26:44 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[58371]:	2012/09/10_15:26:44 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:26:44 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 58371 exited with return code 0
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 58382)
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 58383)
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 58382 exited with return code 0
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 58383 exited with return code 0
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 58388)
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 58389)
SCSTTarget(Target_iscsi2)[58388]:	2012/09/10_15:26:45 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 58388 exited with return code 0
SCSTLun(Lun_iscsi2)[58389]:	2012/09/10_15:26:45 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[58389]:	2012/09/10_15:26:45 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 58389 exited with return code 0
Sep 10 15:26:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 2414)
Sep 10 15:26:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 2415)
Sep 10 15:26:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 2414 exited with return code 0
Sep 10 15:26:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:26:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 2415 exited with return code 0
Sep 10 15:26:46 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 58402)
Sep 10 15:26:46 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 2420)
Sep 10 15:26:48 Cluster-Server-1 attrd_updater: [58420]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:48 Cluster-Server-1 attrd_updater: [58420]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:48 Cluster-Server-1 attrd_updater: [58420]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:48 Cluster-Server-1 attrd_updater: [58420]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 58402 exited with return code 0
Sep 10 15:26:48 Cluster-Server-2 attrd_updater: [2476]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:48 Cluster-Server-2 attrd_updater: [2476]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:48 Cluster-Server-2 attrd_updater: [2476]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:26:48 Cluster-Server-2 attrd_updater: [2476]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:26:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:26:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:26:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 2420 exited with return code 0
Sep 10 15:26:48 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 2477)
drbd(p_Device_drive:1)[2477]:	2012/09/10_15:26:48 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:48 Cluster-Server-2 crm_attribute: [2564]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:26:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[2477]:	2012/09/10_15:26:48 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[2477]:	2012/09/10_15:26:48 DEBUG: drive: Command output: 
Sep 10 15:26:48 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:26:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 2477 exited with return code 0
Sep 10 15:26:49 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 58421)
drbd(p_Device_drive:0)[58421]:	2012/09/10_15:26:49 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:49 Cluster-Server-1 crm_attribute: [58451]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:49 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:26:49 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[58421]:	2012/09/10_15:26:49 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[58421]:	2012/09/10_15:26:49 DEBUG: drive: Command output: 
Sep 10 15:26:49 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:26:49 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 58421 exited with return code 8
Sep 10 15:26:50 Cluster-Server-1 lrmd: [48712]: debug: rsc:FS_nfs1 monitor[37] (pid 58720)
Sep 10 15:26:50 Cluster-Server-1 lrmd: [48712]: info: operation monitor[37] on FS_nfs1 for client 48715: pid 58720 exited with return code 0
Sep 10 15:26:51 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs1 monitor[39] (pid 58752)
exportfs(ExportFS_nfs1)[58752]:	2012/09/10_15:26:51 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[58752]:	2012/09/10_15:26:51 INFO: Directory /volumes/nfs1 is exported to * (started).
Sep 10 15:26:51 Cluster-Server-1 lrmd: [48712]: info: operation monitor[39] on ExportFS_nfs1 for client 48715: pid 58752 exited with return code 0
Sep 10 15:26:54 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 58836)
Sep 10 15:26:54 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 58837)
SCSTTarget(Target_iscsi3)[58836]:	2012/09/10_15:26:54 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:26:54 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 58836 exited with return code 0
SCSTLun(Lun_iscsi3)[58837]:	2012/09/10_15:26:54 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[58837]:	2012/09/10_15:26:54 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:26:54 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 58837 exited with return code 0
Sep 10 15:26:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 58861)
Sep 10 15:26:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 58862)
SCSTTarget(Target_iscsi2)[58861]:	2012/09/10_15:26:55 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:26:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 58861 exited with return code 0
SCSTLun(Lun_iscsi2)[58862]:	2012/09/10_15:26:55 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[58862]:	2012/09/10_15:26:55 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:26:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 58862 exited with return code 0
Sep 10 15:26:58 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 58907)
Sep 10 15:26:58 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 3469)
Sep 10 15:26:59 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 58944)
drbd(p_Device_drive:0)[58944]:	2012/09/10_15:26:59 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:26:59 Cluster-Server-1 crm_attribute: [58974]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:26:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:26:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[58944]:	2012/09/10_15:26:59 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[58944]:	2012/09/10_15:26:59 DEBUG: drive: Command output: 
Sep 10 15:26:59 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:26:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 58944 exited with return code 8
Sep 10 15:27:00 Cluster-Server-1 attrd_updater: [58994]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:00 Cluster-Server-1 attrd_updater: [58994]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:00 Cluster-Server-1 attrd_updater: [58994]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:00 Cluster-Server-1 attrd_updater: [58994]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:00 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:00 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:00 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 58907 exited with return code 0
Sep 10 15:27:00 Cluster-Server-2 attrd_updater: [3655]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:00 Cluster-Server-2 attrd_updater: [3655]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:00 Cluster-Server-2 attrd_updater: [3655]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:00 Cluster-Server-2 attrd_updater: [3655]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:00 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:00 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:00 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 3469 exited with return code 0
Sep 10 15:27:01 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs1 monitor[39] (pid 59016)
exportfs(ExportFS_nfs1)[59016]:	2012/09/10_15:27:01 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[59016]:	2012/09/10_15:27:01 INFO: Directory /volumes/nfs1 is exported to * (started).
Sep 10 15:27:01 Cluster-Server-1 lrmd: [48712]: info: operation monitor[39] on ExportFS_nfs1 for client 48715: pid 59016 exited with return code 0
Sep 10 15:27:04 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 59082)
Sep 10 15:27:04 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 59088)
SCSTTarget(Target_iscsi3)[59082]:	2012/09/10_15:27:04 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:27:04 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 59082 exited with return code 0
SCSTLun(Lun_iscsi3)[59088]:	2012/09/10_15:27:04 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[59088]:	2012/09/10_15:27:04 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:27:04 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 59088 exited with return code 0
Sep 10 15:27:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 59119)
Sep 10 15:27:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 59120)
SCSTTarget(Target_iscsi2)[59119]:	2012/09/10_15:27:05 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:27:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 59119 exited with return code 0
SCSTLun(Lun_iscsi2)[59120]:	2012/09/10_15:27:05 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[59120]:	2012/09/10_15:27:05 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:27:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 59120 exited with return code 0
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59163] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59163] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59163] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59165] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59165] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59165] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59167] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59167] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59167] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59169] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59169] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59169] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59178] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59178] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59178] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59187] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59187] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59187] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59194] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59194] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59194] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59201] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59201] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59201] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59208] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59208] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59208] is unregistered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [59216] registered
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:59216] disconnected.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:59216] is unregistered
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 80000us
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 80000 vs 430000 (usec)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 12 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=76
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-10
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-10
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-10
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-10: join_ack_nack
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs1 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 141 for pingd=100 passed
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 143 for probe_complete=true passed
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs1 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs1 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 59238 exited with return code 0.
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs1 after complete monitor op (interval=20000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 59250 exited with return code 0.
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi3 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi3 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-10: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 145 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 147 for probe_complete=true passed
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource FS_nfs2
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=18:17:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs2_monitor_0
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs2
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[40] on FS_nfs2 for client 48715, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs2_NFS] directory=[/volumes/nfs2] force_clones=[false] fstype=[xfs] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs2 probe[40] (pid 59265)
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource ExportFS_nfs2
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=19:17:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs2_monitor_0
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs2
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[41] on ExportFS_nfs2 for client 48715, its parameters: crm_feature_set=[3.0.6] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1directory=[/volumes/nfs2] fsid=[2fbaecfe-fb4b-11e1-a319-000c290247c7] clientspec=[*] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs2 probe[41] (pid 59266)
exportfs(ExportFS_nfs2)[59266]:	2012/09/10_15:27:06 INFO: Directory /volumes/nfs2 is not exported to * (stopped).
exportfs(ExportFS_nfs2)[59266]:	2012/09/10_15:27:06 INFO: Directory /volumes/nfs2 is not exported to * (stopped).
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: WARN: Managed ExportFS_nfs2:monitor process 59266 exited with return code 7.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: info: operation monitor[41] on ExportFS_nfs2 for client 48715: pid 59266 exited with return code 7
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs2_monitor_0 (call=41, rc=7, cib-update=83, confirmed=true) not running
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs2'
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: WARN: Managed FS_nfs2:monitor process 59265 exited with return code 7.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: info: operation monitor[40] on FS_nfs2 for client 48715: pid 59265 exited with return code 7
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs2_monitor_0 (call=40, rc=7, cib-update=84, confirmed=true) not running
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs2'
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:06 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 149 for pingd=100 passed
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 152 for pingd=100 passed
Sep 10 15:27:06 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 154 for probe_complete=true passed
Sep 10 15:27:06 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=100:18:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs2_start_0
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs2
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[42] on FS_nfs2 for client 48715, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs2_NFS] directory=[/volumes/nfs2] CRM_meta_name=[start] force_clones=[false] CRM_meta_timeout=[60000] fstype=[xfs]  to the operation list.
Sep 10 15:27:06 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs2 start[42] (pid 59309)
Filesystem(FS_nfs2)[59309]:	2012/09/10_15:27:06 INFO: Running start for /dev/drive-CSD/nfs2_NFS on /volumes/nfs2
Filesystem(FS_nfs2)[59309]:	2012/09/10_15:27:06 INFO: Running start for /dev/drive-CSD/nfs2_NFS on /volumes/nfs2
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.14.39 -> 0.15.1 (S_IDLE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.15.1) : Non-status change
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="14" num_updates="39" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="14" num_updates="39" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="15" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:26:28 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="NFS_nfs2" __crm_diff_marker__="added:top" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="FS_nfs2" provider="nas" type="Filesystem" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="FS_nfs2-instance_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs2-instance_attributes-device" name="device" value="/dev/drive-CSD/nfs2_NFS" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs2-instance_attributes-directory" name="directory" value="/volumes/nfs2" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs2-instance_attributes-fstype" name="fstype" value="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs2-instance_attributes-force_clones" name="force_clones" value="false" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.14.39 -> 0.15.1 from Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs2-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs2-stop-0" interval="0" name="stop" timeout="60" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs2-monitor-20" interval="20" name="monitor" timeout="40" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="FS_nfs2-meta_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs2-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="ExportFS_nfs2" provider="nas" type="exportfs" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="ExportFS_nfs2-instance_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs2-instance_attributes-fsid" name="fsid" value="2fbaecfe-fb4b-11e1-a319-000c290247c7" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs2-instance_attributes-directory" name="directory" value="/volumes/nfs2" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs2-instance_attributes-options" name="options" value="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs2-instance_attributes-clientspec" name="clientspec" value="*" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs2-start-0" interval="0" name="start" timeout="40" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs2-stop-0" interval="0" name="stop" timeout="10" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="14" num_updates="39" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs2-monitor-10" interval="10" name="monitor" timeout="20" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="ExportFS_nfs2-meta_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs2-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="15" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:26:28 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="NFS_nfs2" __crm_diff_marker__="added:top" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="NFS_nfs2_after_LVM_drive" score="INFINITY" then="NFS_nfs2" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="NFS_Server" id="NFS_nfs2_after_NFS_Server" score="INFINITY" then="NFS_nfs2" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="FS_nfs2" provider="nas" type="Filesystem" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="NFS_nfs2_with_LVM_drive" rsc="NFS_nfs2" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="NFS_nfs2_with_NFS_Server" rsc="NFS_nfs2" score="INFINITY" with-rsc="NFS_Server" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="FS_nfs2-instance_attributes" >
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs2-instance_attributes-device" name="device" value="/dev/drive-CSD/nfs2_NFS" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs2-instance_attributes-directory" name="directory" value="/volumes/nfs2" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs2-instance_attributes-fstype" name="fstype" value="xfs" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs2-instance_attributes-force_clones" name="force_clones" value="false" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="FS_nfs2-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="FS_nfs2-stop-0" interval="0" name="stop" timeout="60" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="FS_nfs2-monitor-20" interval="20" name="monitor" timeout="40" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="FS_nfs2-meta_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs2-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="FS_nfs2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <primitive class="ocf" id="ExportFS_nfs2" provider="nas" type="exportfs" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <instance_attributes id="ExportFS_nfs2-instance_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 258: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs2-instance_attributes-fsid" name="fsid" value="2fbaecfe-fb4b-11e1-a319-000c290247c7" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 430000us
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 12
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=305
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 430000us
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 12 (current: 12, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs2-instance_attributes-directory" name="directory" value="/volumes/nfs2" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs2-instance_attributes-options" name="options" value="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs2-instance_attributes-clientspec" name="clientspec" value="*" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </instance_attributes>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <operations >
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="ExportFS_nfs2-start-0" interval="0" name="start" timeout="40" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="ExportFS_nfs2-stop-0" interval="0" name="stop" timeout="10" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <op id="ExportFS_nfs2-monitor-10" interval="10" name="monitor" timeout="20" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </operations>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <meta_attributes id="ExportFS_nfs2-meta_attributes" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 430000us
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 430000 vs 0  (usec)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 12 (current: 12, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs2-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +             <nvpair id="ExportFS_nfs2-meta_attributes-target-role" name="target-role" value="Started" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +           </meta_attributes>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </primitive>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <constraints >
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="LVM_drive" id="NFS_nfs2_after_LVM_drive" score="INFINITY" then="NFS_nfs2" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_order first="NFS_Server" id="NFS_nfs2_after_NFS_Server" score="INFINITY" then="NFS_nfs2" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=307
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="NFS_nfs2_with_LVM_drive" rsc="NFS_nfs2" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <rsc_colocation id="NFS_nfs2_with_NFS_Server" rsc="NFS_nfs2" score="INFINITY" with-rsc="NFS_Server" __crm_diff_marker__="added:top" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </constraints>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.15.1): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/256, version=0.15.2): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/259, version=0.15.4): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/260, version=0.15.5): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 141 for master-p_Device_drive:1=10000 passed
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 143 for probe_complete=true passed
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 145 for pingd=100 passed
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/262, version=0.15.9): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-10: Initializing join data (flag=true)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-10: Sending offer to Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-10: Sending offer to Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-10: Waiting on 2 outstanding join acks
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-10
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/264, version=0.15.10): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 265 : Parsing CIB options
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-10
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-10: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283626-162)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-10
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-10: Still waiting on 1 outstanding offers
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 4410 exited with return code 0.
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-10: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283626-34)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-10
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-10: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=311
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finializing join-10 for 2 clients
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-10: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/267, version=0.15.12): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-10: Still waiting on 2 integrated nodes
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-10 results
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-10: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-10: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-10
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-10: join_ack_nack
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/268, version=0.15.13): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-10: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/269, version=0.15.14): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-10: Updating node state to member for Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-10: Registered callback for LRM update 271
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/270, version=0.15.15): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 271 complete
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-10: Still waiting on 1 finalized nodes
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-10: Updating node state to member for Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-10: Registered callback for LRM update 273
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/272, version=0.15.17): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 273 complete
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-10 complete: join_update_complete_callback
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=276)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 277: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.18 -> 0.15.19 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/274, version=0.15.19): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.19 -> 0.15.20 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.20 -> 0.15.21 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/276, version=0.15.21): ok (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.21 -> 0.15.22 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.22 -> 0.15.23 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.23 -> 0.15.24 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 148 for pingd=100 passed
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Stopped 
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing FS_nfs2 on Cluster-Server-1 (Stopped)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing ExportFS_nfs2 on Cluster-Server-1 (Stopped)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing FS_nfs2 on Cluster-Server-2 (Stopped)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing ExportFS_nfs2 on Cluster-Server-2 (Stopped)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for FS_nfs2 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for ExportFS_nfs2 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=277, ref=pe_calc-dc-1347283626-166, seq=312, quorate=1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.24 -> 0.15.25 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.25 -> 0.15.26 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs1	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs1	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   FS_nfs2	(Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   ExportFS_nfs2	(Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.26 -> 0.15.27 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283626-166" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-17.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="17" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="109" operation="running" operation_key="NFS_nfs2_running_0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="104" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="106" operation="start" operation_key="ExportFS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="108" operation="start" operation_key="NFS_nfs2_start_0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="108" operation="start" operation_key="NFS_nfs2_start_0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="105" operation="monitor" operation_key="FS_nfs2_monitor_20000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs2" long-id="NFS_nfs2:FS_nfs2" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="20000" CRM_meta_name="monitor" CRM_meta_timeout="40000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs2_NFS" directory="/volumes/nfs2" force_clones="false" fstype="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="104" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="104" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs2" long-id="NFS_nfs2:FS_nfs2" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs2_NFS" directory="/volumes/nfs2" force_clones="false" fstype="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="16" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="108" operation="start" operation_key="NFS_nfs2_start_0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="21" operation="monitor" operation_key="FS_nfs2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs2" long-id="NFS_nfs2:FS_nfs2" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs2_NFS" directory="/volumes/nfs2" force_clones="false" fstype="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="monitor" operation_key="FS_nfs2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs2" long-id="NFS_nfs2:FS_nfs2" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs2_NFS" directory="/volumes/nfs2" force_clones="false" fstype="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="107" operation="monitor" operation_key="ExportFS_nfs2_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs2" long-id="NFS_nfs2:ExportFS_nfs2" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs2" fsid="2fbaecfe-fb4b-11e1-a319-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="106" operation="start" operation_key="ExportFS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="7" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="106" operation="start" operation_key="ExportFS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs2" long-id="NFS_nfs2:ExportFS_nfs2" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="40000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs2" fsid="2fbaecfe-fb4b-11e1-a319-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="16" operation="probe_complete" operation_key="probe_complete" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="104" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="108" operation="start" operation_key="NFS_nfs2_start_0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="8" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="22" operation="monitor" operation_key="ExportFS_nfs2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs2" long-id="NFS_nfs2:ExportFS_nfs2" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs2" fsid="2fbaecfe-fb4b-11e1-a319-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="9" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="19" operation="monitor" operation_key="ExportFS_nfs2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs2" long-id="NFS_nfs2:ExportFS_nfs2" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs2" fsid="2fbaecfe-fb4b-11e1-a319-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="10" priority="1000000" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="20" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="21" operation="monitor" operation_key="FS_nfs2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="22" operation="monitor" operation_key="ExportFS_nfs2_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="11" priority="1000000" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="monitor" operation_key="FS_nfs2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="19" operation="monitor" operation_key="ExportFS_nfs2_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="12" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="16" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="20" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 150 for probe_complete=true passed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 152 for master-p_Device_drive:1=10000 passed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 154 for probe_complete=true passed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 17: 13 actions in 13 synapses
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 17 (ref=pe_calc-dc-1347283626-166) derived from /var/lib/pengine/pe-input-17.bz2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.27 -> 0.15.28 (S_TRANSITION_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 108 fired and confirmed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 21: monitor FS_nfs2_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 156 for pingd=100 passed
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource FS_nfs2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=21:17:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs2_monitor_0
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs2
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[22] on FS_nfs2 for client 40197, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs2_NFS] directory=[/volumes/nfs2] force_clones=[false] fstype=[xfs] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: info: rsc:FS_nfs2 probe[22] (pid 4411)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: monitor FS_nfs2_monitor_0 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 22: monitor ExportFS_nfs2_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource ExportFS_nfs2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=22:17:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs2_monitor_0
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs2
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[23] on ExportFS_nfs2 for client 40197, its parameters: crm_feature_set=[3.0.6] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1directory=[/volumes/nfs2] fsid=[2fbaecfe-fb4b-11e1-a319-000c290247c7] clientspec=[*] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: info: rsc:ExportFS_nfs2 probe[23] (pid 4412)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 19: monitor ExportFS_nfs2_monitor_0 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=0, Pending=4, Fired=5, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.28 -> 0.15.29 (S_TRANSITION_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=1, Pending=4, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: WARN: Managed FS_nfs2:monitor process 4411 exited with return code 5.
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: info: operation monitor[22] on FS_nfs2 for client 40197: pid 4411 exited with return code 5
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation FS_nfs2_monitor_0 (call=22, rc=5, cib-update=278, confirmed=true) not installed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs2'
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.29 -> 0.15.30 (S_TRANSITION_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 21 (FS_nfs2_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 5): Error
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FS_nfs2_last_failure_0, magic=0:5;21:17:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.15.30) : Event failed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort priority upgraded from 0 to 1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort action done superceeded by restart
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs2_monitor_0 (21) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=2, Pending=3, Fired=0, Skipped=6, Incomplete=2, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: WARN: Managed ExportFS_nfs2:monitor process 4412 exited with return code 5.
Sep 10 15:27:06 Cluster-Server-2 lrmd: [40194]: info: operation monitor[23] on ExportFS_nfs2 for client 40197: pid 4412 exited with return code 5
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 17: PEngine Input stored in: /var/lib/pengine/pe-input-17.bz2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation ExportFS_nfs2_monitor_0 (call=23, rc=5, cib-update=279, confirmed=true) not installed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs2'
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.30 -> 0.15.31 (S_TRANSITION_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 22 (ExportFS_nfs2_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 5): Error
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ExportFS_nfs2_last_failure_0, magic=0:5;22:17:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.15.31) : Event failed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs2_monitor_0 (22) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 20: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=3, Pending=2, Fired=1, Skipped=6, Incomplete=1, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=4, Pending=2, Fired=0, Skipped=6, Incomplete=1, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:27:06 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.31 -> 0.15.32 (S_TRANSITION_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs2_monitor_0 (19) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=5, Pending=1, Fired=0, Skipped=6, Incomplete=1, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.32 -> 0.15.33 (S_TRANSITION_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs2_monitor_0 (18) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 17 (Complete=6, Pending=0, Fired=1, Skipped=6, Incomplete=0, Source=/var/lib/pengine/pe-input-17.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 17 (Complete=7, Pending=0, Fired=0, Skipped=6, Incomplete=0, Source=/var/lib/pengine/pe-input-17.bz2): Stopped
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 17 is now complete
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 17 status: restart - Event failed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 280: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.33 -> 0.15.34 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.34 -> 0.15.35 (S_POLICY_ENGINE)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=280, ref=pe_calc-dc-1347283626-173, seq=312, quorate=1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Stopped 
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (20s) for FS_nfs2 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: RecurringOp:  Start recurring monitor (10s) for ExportFS_nfs2 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs1	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs1	(Started Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   FS_nfs2	(Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: LogActions: Start   ExportFS_nfs2	(Cluster-Server-1)
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283626-173" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-18.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="18" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="105" operation="running" operation_key="NFS_nfs2_running_0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="100" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="102" operation="start" operation_key="ExportFS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="104" operation="start" operation_key="NFS_nfs2_start_0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="104" operation="start" operation_key="NFS_nfs2_start_0" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="101" operation="monitor" operation_key="FS_nfs2_monitor_20000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs2" long-id="NFS_nfs2:FS_nfs2" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="20000" CRM_meta_name="monitor" CRM_meta_timeout="40000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs2_NFS" directory="/volumes/nfs2" force_clones="false" fstype="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="100" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="100" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs2" long-id="NFS_nfs2:FS_nfs2" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs2_NFS" directory="/volumes/nfs2" force_clones="false" fstype="xfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="104" operation="start" operation_key="NFS_nfs2_start_0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="103" operation="monitor" operation_key="ExportFS_nfs2_monitor_10000" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs2" long-id="NFS_nfs2:ExportFS_nfs2" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs2" fsid="2fbaecfe-fb4b-11e1-a319-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="102" operation="start" operation_key="ExportFS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="102" operation="start" operation_key="ExportFS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs2" long-id="NFS_nfs2:ExportFS_nfs2" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="start" CRM_meta_timeout="40000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs2" fsid="2fbaecfe-fb4b-11e1-a319-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="100" operation="start" operation_key="FS_nfs2_start_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="104" operation="start" operation_key="NFS_nfs2_start_0" />
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 18: 6 actions in 6 synapses
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 18 (ref=pe_calc-dc-1347283626-173) derived from /var/lib/pengine/pe-input-18.bz2
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 104 fired and confirmed
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 100: start FS_nfs2_start_0 on Cluster-Server-1
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 18 (Complete=0, Pending=1, Fired=2, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-18.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 18 (Complete=1, Pending=1, Fired=0, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-input-18.bz2): In-progress
Sep 10 15:27:06 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 18: PEngine Input stored in: /var/lib/pengine/pe-input-18.bz2
Sep 10 15:27:08 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 4578)
drbd(p_Device_drive:1)[4578]:	2012/09/10_15:27:08 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:08 Cluster-Server-2 crm_attribute: [4614]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:08 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:27:08 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[4578]:	2012/09/10_15:27:08 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[4578]:	2012/09/10_15:27:08 DEBUG: drive: Command output: 
Sep 10 15:27:08 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:27:08 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 4578 exited with return code 0
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: Managed FS_nfs2:start process 59309 exited with return code 0.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: operation start[42] on FS_nfs2 for client 48715: pid 59309 exited with return code 0
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs2 after complete start op (interval=0)
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs2_start_0 (call=42, rc=0, cib-update=85, confirmed=true) ok
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'FS_nfs2'
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=101:18:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs2_monitor_20000
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[43] on FS_nfs2 for client 48715, its parameters: fstype=[xfs] CRM_meta_timeout=[40000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs2_NFS] force_clones=[false] CRM_meta_interval=[20000] directory=[/volumes/nfs2]  to the operation list.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs2 monitor[43] (pid 59389)
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=102:18:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs2_start_0
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs2
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation start[44] on ExportFS_nfs2 for client 48715, its parameters: crm_feature_set=[3.0.6] directory=[/volumes/nfs2] fsid=[2fbaecfe-fb4b-11e1-a319-000c290247c7] CRM_meta_name=[start] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1clientspec=[*] CRM_meta_timeout=[40000]  to the operation list.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs2 start[44] (pid 59390)
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: Directory /volumes/nfs2 is not exported to * (stopped).
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: Directory /volumes/nfs2 is not exported to * (stopped).
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: Exporting file system ...
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: Exporting file system ...
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: Managed FS_nfs2:monitor process 59389 exited with return code 0.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[43] on FS_nfs2 for client 48715: pid 59389 exited with return code 0
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs2 after complete monitor op (interval=20000)
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs2_monitor_20000 (call=43, rc=0, cib-update=86, confirmed=false) ok
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs2'
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: exporting *:/volumes/nfs2
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: exporting *:/volumes/nfs2
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 WARNING: rmtab backup /volumes/nfs2/.rmtab not found or not readable.
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 WARNING: rmtab backup /volumes/nfs2/.rmtab not found or not readable.
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: File system exported
exportfs(ExportFS_nfs2)[59390]:	2012/09/10_15:27:09 INFO: File system exported
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: Managed ExportFS_nfs2:start process 59390 exited with return code 0.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: operation start[44] on ExportFS_nfs2 for client 48715: pid 59390 exited with return code 0
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs2 after complete start op (interval=0)
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs2_start_0 (call=44, rc=0, cib-update=87, confirmed=true) ok
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending start op to history for 'ExportFS_nfs2'
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=103:18:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs2_monitor_10000
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[45] on ExportFS_nfs2 for client 48715, its parameters: options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1CRM_meta_timeout=[20000] CRM_meta_name=[monitor] fsid=[2fbaecfe-fb4b-11e1-a319-000c290247c7] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] clientspec=[*] directory=[/volumes/nfs2]  to the operation list.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs2 monitor[45] (pid 59456)
exportfs(ExportFS_nfs2)[59456]:	2012/09/10_15:27:09 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[59456]:	2012/09/10_15:27:09 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: Managed ExportFS_nfs2:monitor process 59456 exited with return code 0.
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 59456 exited with return code 0
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs2 after complete monitor op (interval=10000)
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs2_monitor_10000 (call=45, rc=0, cib-update=88, confirmed=false) ok
Sep 10 15:27:09 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs2'
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 59467)
drbd(p_Device_drive:0)[59467]:	2012/09/10_15:27:09 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:09 Cluster-Server-1 crm_attribute: [59497]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:09 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:27:09 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[59467]:	2012/09/10_15:27:09 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[59467]:	2012/09/10_15:27:09 DEBUG: drive: Command output: 
Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:27:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 59467 exited with return code 8
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.35 -> 0.15.36 (S_TRANSITION_ENGINE)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs2_start_0 (100) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 101: monitor FS_nfs2_monitor_20000 on Cluster-Server-1
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 102: start ExportFS_nfs2_start_0 on Cluster-Server-1
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 18 (Complete=2, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-18.bz2): In-progress
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.36 -> 0.15.37 (S_TRANSITION_ENGINE)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs2_monitor_20000 (101) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 18 (Complete=3, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-18.bz2): In-progress
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.37 -> 0.15.38 (S_TRANSITION_ENGINE)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs2_start_0 (102) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 105 fired and confirmed
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 103: monitor ExportFS_nfs2_monitor_10000 on Cluster-Server-1
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 18 (Complete=4, Pending=1, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-18.bz2): In-progress
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 18 (Complete=5, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-18.bz2): In-progress
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.15.38 -> 0.15.39 (S_TRANSITION_ENGINE)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs2_monitor_10000 (103) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 18 (Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-18.bz2): Complete
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 18 is now complete
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 18 status: done - <null>
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=332
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:09 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:10 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 59514)
Sep 10 15:27:10 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 4670)
Sep 10 15:27:11 Cluster-Server-1 lrmd: [48712]: debug: rsc:FS_nfs1 monitor[37] (pid 59530)
Sep 10 15:27:11 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs1 monitor[39] (pid 59557)
Sep 10 15:27:11 Cluster-Server-1 lrmd: [48712]: info: operation monitor[37] on FS_nfs1 for client 48715: pid 59530 exited with return code 0
exportfs(ExportFS_nfs1)[59557]:	2012/09/10_15:27:11 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[59557]:	2012/09/10_15:27:11 INFO: Directory /volumes/nfs1 is exported to * (started).
Sep 10 15:27:11 Cluster-Server-1 lrmd: [48712]: info: operation monitor[39] on ExportFS_nfs1 for client 48715: pid 59557 exited with return code 0
Sep 10 15:27:12 Cluster-Server-1 attrd_updater: [59634]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:12 Cluster-Server-1 attrd_updater: [59634]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:12 Cluster-Server-1 attrd_updater: [59634]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:12 Cluster-Server-1 attrd_updater: [59634]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:12 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 59514 exited with return code 0
Sep 10 15:27:12 Cluster-Server-2 attrd_updater: [4978]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:12 Cluster-Server-2 attrd_updater: [4978]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:12 Cluster-Server-2 attrd_updater: [4978]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:12 Cluster-Server-2 attrd_updater: [4978]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:12 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 4670 exited with return code 0
Sep 10 15:27:14 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 59642)
Sep 10 15:27:14 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 59645)
SCSTTarget(Target_iscsi3)[59642]:	2012/09/10_15:27:14 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:27:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 59642 exited with return code 0
SCSTLun(Lun_iscsi3)[59645]:	2012/09/10_15:27:14 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[59645]:	2012/09/10_15:27:14 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:27:14 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 59645 exited with return code 0
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 59799)
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 59800)
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 59799 exited with return code 0
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 59800 exited with return code 0
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 59844)
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 59849)
SCSTTarget(Target_iscsi2)[59844]:	2012/09/10_15:27:15 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 59844 exited with return code 0
SCSTLun(Lun_iscsi2)[59849]:	2012/09/10_15:27:15 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[59849]:	2012/09/10_15:27:15 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:27:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 59849 exited with return code 0
Sep 10 15:27:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 5208)
Sep 10 15:27:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 5209)
Sep 10 15:27:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:27:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 5208 exited with return code 0
Sep 10 15:27:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 5209 exited with return code 0
Sep 10 15:27:19 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 60147)
exportfs(ExportFS_nfs2)[60147]:	2012/09/10_15:27:19 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[60147]:	2012/09/10_15:27:19 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:27:19 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 60147 exited with return code 0
Sep 10 15:27:19 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 60206)
drbd(p_Device_drive:0)[60206]:	2012/09/10_15:27:19 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:19 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:27:19 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:19 Cluster-Server-1 crm_attribute: [60245]: info: crm_xml_cleanup: Cleaning up memory from libxml2
drbd(p_Device_drive:0)[60206]:	2012/09/10_15:27:19 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[60206]:	2012/09/10_15:27:19 DEBUG: drive: Command output: 
Sep 10 15:27:19 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 60206 exited with return code 8
Sep 10 15:27:19 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:27:21 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs1 monitor[39] (pid 60298)
exportfs(ExportFS_nfs1)[60298]:	2012/09/10_15:27:21 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[60298]:	2012/09/10_15:27:21 INFO: Directory /volumes/nfs1 is exported to * (started).
Sep 10 15:27:21 Cluster-Server-1 lrmd: [48712]: info: operation monitor[39] on ExportFS_nfs1 for client 48715: pid 60298 exited with return code 0
Sep 10 15:27:22 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 60314)
Sep 10 15:27:22 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 5784)
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="15" num_updates="39" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.15.39 -> 0.16.1 (S_IDLE)
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.16.1) : Non-status change
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <resources >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="15" num_updates="39" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <group id="NFS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="15" num_updates="39" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive id="FS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="FS_nfs1-meta_attributes" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="FS_nfs1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="NFS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive id="FS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="FS_nfs1-meta_attributes" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive id="ExportFS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="ExportFS_nfs1-meta_attributes" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="ExportFS_nfs1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive id="ExportFS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </group>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </resources>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="ExportFS_nfs1-meta_attributes" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="16" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:06 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="NFS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="16" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:06 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="NFS_nfs1-meta_attributes" __crm_diff_marker__="added:top" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="NFS_nfs1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="NFS_nfs1" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="NFS_nfs1-meta_attributes" __crm_diff_marker__="added:top" >
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="NFS_nfs1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=Cluster-Server-1/cibadmin/2, version=0.16.1): ok (rc=0)
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:22 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 281: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:22 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected 3a2ae85c18bcc0caac2c445e79aaabc7, calculated 2fa5d48bc5c3cbbc1f57b6aeca085db0
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.15.39 -> 0.16.1 not applied to 0.15.39: Failed application of an update diff
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.16.1 from Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Cancelling op 39 for ExportFS_nfs1 (ExportFS_nfs1:39)
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: cancel_op: operation monitor[39] on ExportFS_nfs1 for client 48715, its parameters: options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1CRM_meta_timeout=[20000] CRM_meta_name=[monitor] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] clientspec=[*] directory=[/volumes/nfs1]  cancelled
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: debug: on_msg_cancel_op: operation 39 cancelled
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Op 39 for ExportFS_nfs1 (ExportFS_nfs1:39): cancelled
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=95:19:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_stop_0
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation stop[46] on ExportFS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[stop] CRM_meta_timeout=[10000]  to the operation list.
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs1 stop[46] (pid 60393)
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs1_monitor_10000 (call=39, status=1, cib-update=0, confirmed=true) Cancelled
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs1'
exportfs(ExportFS_nfs1)[60393]:	2012/09/10_15:27:23 INFO: Directory /volumes/nfs1 is exported to * (started).
exportfs(ExportFS_nfs1)[60393]:	2012/09/10_15:27:23 INFO: Un-exporting file system ...
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 156 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 158 for probe_complete=true passed
exportfs(ExportFS_nfs1)[60393]:	2012/09/10_15:27:23 INFO: unexporting *:/volumes/nfs1
exportfs(ExportFS_nfs1)[60393]:	2012/09/10_15:27:23 INFO: Un-exported file system
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: Managed ExportFS_nfs1:stop process 60393 exited with return code 0.
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: operation stop[46] on ExportFS_nfs1 for client 48715: pid 60393 exited with return code 0
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs1 after complete stop op (interval=0)
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs1_stop_0 (call=46, rc=0, cib-update=89, confirmed=true) ok
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending stop op to history for 'ExportFS_nfs1'
Sep 10 15:27:23 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 160 for pingd=100 passed
Sep 10 15:27:23 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 60392 exited with return code 0.
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Cancelling op 37 for FS_nfs1 (FS_nfs1:37)
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: cancel_op: operation monitor[37] on FS_nfs1 for client 48715, its parameters: fstype=[xfs] CRM_meta_timeout=[40000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] force_clones=[false] CRM_meta_interval=[20000] directory=[/volumes/nfs1]  cancelled
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: debug: on_msg_cancel_op: operation 37 cancelled
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Op 37 for FS_nfs1 (FS_nfs1:37): cancelled
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=94:19:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_stop_0
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation stop[47] on FS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[stop] CRM_meta_timeout=[60000]  to the operation list.
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs1 stop[47] (pid 60423)
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs1_monitor_20000 (call=37, status=1, cib-update=0, confirmed=true) Cancelled
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs1'
Filesystem(FS_nfs1)[60423]:	2012/09/10_15:27:23 INFO: Running stop for /dev/drive-CSD/nfs1_NFS on /volumes/nfs1
Filesystem(FS_nfs1)[60423]:	2012/09/10_15:27:23 INFO: Trying to unmount /volumes/nfs1
Filesystem(FS_nfs1)[60423]:	2012/09/10_15:27:23 INFO: unmounted /volumes/nfs1 successfully
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: Managed FS_nfs1:stop process 60423 exited with return code 0.
Sep 10 15:27:23 Cluster-Server-1 lrmd: [48712]: info: operation stop[47] on FS_nfs1 for client 48715: pid 60423 exited with return code 0
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs1 after complete stop op (interval=0)
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs1_stop_0 (call=47, rc=0, cib-update=90, confirmed=true) ok
Sep 10 15:27:23 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending stop op to history for 'FS_nfs1'
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=281, ref=pe_calc-dc-1347283642-178, seq=312, quorate=1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: ExportFS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: FS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: FS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: ExportFS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.16.1): ok (rc=0)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_color: Resource FS_nfs1 cannot run anywhere
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: native_color: Resource ExportFS_nfs1 cannot run anywhere
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: LogActions: Stop    FS_nfs1	(Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: LogActions: Stop    ExportFS_nfs1	(Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:23 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:23 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283642-178" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-19.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="19" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="99" operation="stopped" operation_key="NFS_nfs1_stopped_0" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="94" operation="stop" operation_key="FS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="95" operation="stop" operation_key="ExportFS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="98" operation="stop" operation_key="NFS_nfs1_stop_0" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="98" operation="stop" operation_key="NFS_nfs1_stop_0" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="94" operation="stop" operation_key="FS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="stop" CRM_meta_timeout="60000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="95" operation="stop" operation_key="ExportFS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="98" operation="stop" operation_key="NFS_nfs1_stop_0" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="95" operation="stop" operation_key="ExportFS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="stop" CRM_meta_timeout="10000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="98" operation="stop" operation_key="NFS_nfs1_stop_0" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="17" operation="all_stopped" operation_key="all_stopped" >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="94" operation="stop" operation_key="FS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="95" operation="stop" operation_key="ExportFS_nfs1_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 19: 5 actions in 5 synapses
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 19 (ref=pe_calc-dc-1347283642-178) derived from /var/lib/pengine/pe-input-19.bz2
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.1 -> 0.16.2 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 98 fired and confirmed
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 95: stop ExportFS_nfs1_stop_0 on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 19 (Complete=0, Pending=1, Fired=2, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-19.bz2): In-progress
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 19 (Complete=1, Pending=1, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-19.bz2): In-progress
Sep 10 15:27:23 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 19: PEngine Input stored in: /var/lib/pengine/pe-input-19.bz2
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.2 -> 0.16.3 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.3 -> 0.16.4 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 5811 exited with return code 0.
Sep 10 15:27:23 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:23 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.4 -> 0.16.5 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:23 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:23 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 159 for pingd=100 passed
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.5 -> 0.16.6 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 161 for probe_complete=true passed
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.6 -> 0.16.7 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_stop_0 (95) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 94: stop FS_nfs1_stop_0 on Cluster-Server-1
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 19 (Complete=2, Pending=1, Fired=1, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-19.bz2): In-progress
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.7 -> 0.16.8 (S_TRANSITION_ENGINE)
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_stop_0 (94) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 99 fired and confirmed
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 17 fired and confirmed
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 19 (Complete=3, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-19.bz2): In-progress
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 19 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-19.bz2): Complete
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 19 is now complete
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 19 status: done - <null>
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=336
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:23 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:24 Cluster-Server-1 attrd_updater: [60497]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:24 Cluster-Server-1 attrd_updater: [60497]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:24 Cluster-Server-1 attrd_updater: [60497]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:24 Cluster-Server-1 attrd_updater: [60497]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:24 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:24 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 60314 exited with return code 0
Sep 10 15:27:24 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 60498)
Sep 10 15:27:24 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 60499)
SCSTTarget(Target_iscsi3)[60498]:	2012/09/10_15:27:24 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:27:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 60498 exited with return code 0
SCSTLun(Lun_iscsi3)[60499]:	2012/09/10_15:27:24 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[60499]:	2012/09/10_15:27:24 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:27:24 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 60499 exited with return code 0
Sep 10 15:27:24 Cluster-Server-2 attrd_updater: [5858]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:24 Cluster-Server-2 attrd_updater: [5858]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:24 Cluster-Server-2 attrd_updater: [5858]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:24 Cluster-Server-2 attrd_updater: [5858]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:24 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 5784 exited with return code 0
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_resource: fail-count-FS_nfs1=<null>
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for fail-count-FS_nfs1
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_resource: fail-count-ExportFS_nfs1=<null>
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for fail-count-ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected 6f392ebf34bb36cca0a05e928d37fbd8, calculated 62d2a7ef80be93f5ab8bfad2b36402ee
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.16.8 -> 0.16.9 not applied to 0.16.8: Failed application of an update diff
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: info: delete_resource: Removing resource FS_nfs1 for 60562_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: lrmd_rsc_destroy: removing resource FS_nfs1
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: delete_rsc_entry: sync: Sending delete op for FS_nfs1
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: info: notify_deleted: Notifying 60562_crm_resource on Cluster-Server-1 that FS_nfs1 was deleted
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: WARN: decode_transition_key: Bad UUID (crm-resource-60562) in sscanf result (3) for 0:0:crm-resource-60562
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: send_direct_ack: Updating resouce FS_nfs1 after complete delete op (interval=60000)
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: send_direct_ack: ACK'ing resource op FS_nfs1_delete_60000 from 0:0:crm-resource-60562: lrm_invoke-lrmd-1347283645-37
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: notify_deleted: Triggering a refresh after 60562_crm_resource deleted FS_nfs1 from the LRM
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283482" />
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected 714169222492ac936c7879de67465f3b, calculated f87c9c03c14cf6d7facc455d6bf62d2d
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.17.1 -> 0.17.2 not applied to 0.17.1: Failed application of an update diff
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: info: delete_resource: Removing resource ExportFS_nfs1 for 60562_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: lrmd_rsc_destroy: removing resource ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.17.1 -> 0.17.2 (sync in progress)
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: delete_rsc_entry: sync: Sending delete op for ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: info: notify_deleted: Notifying 60562_crm_resource on Cluster-Server-1 that ExportFS_nfs1 was deleted
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: WARN: decode_transition_key: Bad UUID (crm-resource-60562) in sscanf result (3) for 0:0:crm-resource-60562
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: send_direct_ack: Updating resouce ExportFS_nfs1 after complete delete op (interval=60000)
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: send_direct_ack: ACK'ing resource op ExportFS_nfs1_delete_60000 from 0:0:crm-resource-60562: lrm_invoke-lrmd-1347283645-38
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: notify_deleted: Triggering a refresh after 60562_crm_resource deleted ExportFS_nfs1 from the LRM
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283645" />
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 60563 exited with return code 0.
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.17.1 -> 0.17.2 (sync in progress)
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.17.2 -> 0.17.3 (sync in progress)
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.17.3 -> 0.17.4 (sync in progress)
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.17.3 from Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 60564 exited with return code 0.
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 162 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource FS_nfs1
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=18:21:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_monitor_0
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs1
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[48] on FS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] directory=[/volumes/nfs1] force_clones=[false] fstype=[xfs] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: info: rsc:FS_nfs1 probe[48] (pid 60583)
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 164 for probe_complete=true passed
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=19:21:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_monitor_0
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[49] on ExportFS_nfs1 for client 48715, its parameters: crm_feature_set=[3.0.6] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1directory=[/volumes/nfs1] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] clientspec=[*] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: info: rsc:ExportFS_nfs1 probe[49] (pid 60586)
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 166 for pingd=100 passed
exportfs(ExportFS_nfs1)[60586]:	2012/09/10_15:27:25 INFO: Directory /volumes/nfs1 is not exported to * (stopped).
exportfs(ExportFS_nfs1)[60586]:	2012/09/10_15:27:25 INFO: Directory /volumes/nfs1 is not exported to * (stopped).
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: WARN: Managed ExportFS_nfs1:monitor process 60586 exited with return code 7.
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[49] on ExportFS_nfs1 for client 48715: pid 60586 exited with return code 7
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation ExportFS_nfs1_monitor_0 (call=49, rc=7, cib-update=99, confirmed=true) not running
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs1'
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: WARN: Managed FS_nfs1:monitor process 60583 exited with return code 7.
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[48] on FS_nfs1 for client 48715: pid 60583 exited with return code 7
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation FS_nfs1_monitor_0 (call=48, rc=7, cib-update=100, confirmed=true) not running
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs1'
Sep 10 15:27:25 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 60572 exited with return code 0.
Sep 10 15:27:25 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:27:25 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 60647)
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 60652)
SCSTTarget(Target_iscsi2)[60647]:	2012/09/10_15:27:25 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 60647 exited with return code 0
SCSTLun(Lun_iscsi2)[60652]:	2012/09/10_15:27:25 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[60652]:	2012/09/10_15:27:25 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:27:25 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 60652 exited with return code 0
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: notice: attrd_ais_dispatch: Update relayed from Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from Cluster-Server-1: fail-count-FS_nfs1=<null>
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for fail-count-FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: notice: attrd_ais_dispatch: Update relayed from Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from Cluster-Server-1: fail-count-ExportFS_nfs1=<null>
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for fail-count-ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='FS_nfs1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[3])
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.16.8 -> 0.16.9 (S_IDLE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='FS_nfs1_last_0'] (FS_nfs1_last_0 on Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='FS_nfs1'] (origin=Cluster-Server-1/crmd/91, version=0.16.8): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=FS_nfs1_last_0, magic=0:0;94:19:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.16.9) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 282: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=282, ref=pe_calc-dc-1347283645-181, seq=312, quorate=1
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.16.8): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: ExportFS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: FS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: FS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: ExportFS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='FS_nfs1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[3])
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Stopped 
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='FS_nfs1'] (origin=Cluster-Server-1/crmd/92, version=0.16.9): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_color: Resource FS_nfs1 cannot run anywhere
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_color: Resource ExportFS_nfs1 cannot run anywhere
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs1	(Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs1	(Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_modify op
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="16" num_updates="9" >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <crm_config >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <nvpair value="1347283482" id="cib-bootstrap-options-last-lrm-refresh" />
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </cluster_property_set>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </crm_config>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="17" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:22 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <crm_config >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283645" />
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </cluster_property_set>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </crm_config>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=Cluster-Server-1/crmd/94, version=0.17.1): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='ExportFS_nfs1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[1])
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='ExportFS_nfs1'] (origin=Cluster-Server-1/crmd/95, version=0.17.1): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='FS_nfs1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[2])
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='FS_nfs1'] (origin=local/crmd/283, version=0.17.1): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: delete_resource: Removing resource FS_nfs1 for 60562_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: lrmd_rsc_destroy: removing resource FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 20: PEngine Input stored in: /var/lib/pengine/pe-input-20.bz2
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: delete_rsc_entry: sync: Sending delete op for FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: notify_deleted: Notifying 60562_crm_resource on Cluster-Server-1 that FS_nfs1 was deleted
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: WARN: decode_transition_key: Bad UUID (crm-resource-60562) in sscanf result (3) for 0:0:crm-resource-60562
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: send_direct_ack: Updating resource FS_nfs1 after complete delete op (interval=60000)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: send_direct_ack: ACK'ing resource op FS_nfs1_delete_60000 from 0:0:crm-resource-60562: lrm_invoke-lrmd-1347283645-182
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: notify_deleted: Triggering a refresh after 60562_crm_resource deleted FS_nfs1 from the LRM
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='FS_nfs1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[2])
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='FS_nfs1'] (origin=local/crmd/284, version=0.17.2): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283645" />
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/286, version=0.17.3): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='ExportFS_nfs1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[11])
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='ExportFS_nfs1'] (origin=local/crmd/287, version=0.17.3): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: delete_resource: Removing resource ExportFS_nfs1 for 60562_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: lrmd_rsc_destroy: removing resource ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: delete_rsc_entry: sync: Sending delete op for ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: notify_deleted: Notifying 60562_crm_resource on Cluster-Server-1 that ExportFS_nfs1 was deleted
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: WARN: decode_transition_key: Bad UUID (crm-resource-60562) in sscanf result (3) for 0:0:crm-resource-60562
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: send_direct_ack: Updating resource ExportFS_nfs1 after complete delete op (interval=60000)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: send_direct_ack: ACK'ing resource op ExportFS_nfs1_delete_60000 from 0:0:crm-resource-60562: lrm_invoke-lrmd-1347283645-183
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: notify_deleted: Triggering a refresh after 60562_crm_resource deleted ExportFS_nfs1 from the LRM
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.17.3): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='ExportFS_nfs1'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[1])
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='ExportFS_nfs1'] (origin=Cluster-Server-1/crmd/96, version=0.17.4): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=Cluster-Server-1/crmd/98, version=0.17.5): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='ExportFS_nfs1'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[11])
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='ExportFS_nfs1'] (origin=local/crmd/288, version=0.17.6): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283645" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.16.8 -> 0.16.9 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='FS_nfs1_last_0'] (FS_nfs1_last_0 on Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=FS_nfs1_last_0, magic=0:0;94:19:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.16.9) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.16.9 -> 0.17.1 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.17.1) : Non-status change
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="16" num_updates="9" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="16" num_updates="9" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <crm_config >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair value="1347283482" id="cib-bootstrap-options-last-lrm-refresh" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </cluster_property_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </crm_config>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="17" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:22 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <crm_config >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283645" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </cluster_property_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </crm_config>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.17.1 -> 0.17.2 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='ExportFS_nfs1_last_0'] (ExportFS_nfs1_last_0 on Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=ExportFS_nfs1_last_0, magic=0:0;95:19:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.2) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.17.1 -> 0.17.2 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='FS_nfs1_last_failure_0'] (FS_nfs1_last_failure_0 on Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=FS_nfs1_last_failure_0, magic=0:5;19:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.2) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.17.1 -> 0.17.2 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='FS_nfs1_last_failure_0'] (FS_nfs1_last_failure_0 on Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=FS_nfs1_last_failure_0, magic=0:5;19:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.2) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.2 -> 0.17.3 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.17.3 -> 0.17.4 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='ExportFS_nfs1_last_failure_0'] (ExportFS_nfs1_last_failure_0 on Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=ExportFS_nfs1_last_failure_0, magic=0:5;20:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.4) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.17.3 -> 0.17.4 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='ExportFS_nfs1_last_0'] (ExportFS_nfs1_last_0 on Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=ExportFS_nfs1_last_0, magic=0:0;95:19:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.4) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.4 -> 0.17.5 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.17.5 -> 0.17.6 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='ExportFS_nfs1_last_failure_0'] (ExportFS_nfs1_last_failure_0 on Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=ExportFS_nfs1_last_failure_0, magic=0:5;20:15:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.6) : Resource op removal
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: handle_response: pe_calc calculation pe_calc-dc-1347283645-181 is obsolete
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 291: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 292: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 293: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 294: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 295: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 296: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 297: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 298: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/290, version=0.17.7): ok (rc=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.6 -> 0.17.7 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=298, ref=pe_calc-dc-1347283645-184, seq=312, quorate=1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 299 : Parsing CIB options
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Stopped 
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_color: Resource FS_nfs1 cannot run anywhere
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_color: Resource ExportFS_nfs1 cannot run anywhere
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing FS_nfs1 on Cluster-Server-1 (Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing ExportFS_nfs1 on Cluster-Server-1 (Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing FS_nfs1 on Cluster-Server-2 (Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing ExportFS_nfs1 on Cluster-Server-2 (Stopped)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.7 -> 0.17.8 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs1	(Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs1	(Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.8 -> 0.17.9 (S_POLICY_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283645-184" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-21.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="21" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="21" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="FS_nfs1" long-id="NFS_nfs1:FS_nfs1" class="ocf" provider="nas" type="Filesystem" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device="/dev/drive-CSD/nfs1_NFS" directory="/volumes/nfs1" force_clones="false" fstype="xfs" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="22" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="19" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="ExportFS_nfs1" long-id="NFS_nfs1:ExportFS_nfs1" class="ocf" provider="nas" type="exportfs" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" clientspec="*" crm_feature_set="3.0.6" directory="/volumes/nfs1" fsid="1955f364-fb4b-11e1-b02e-000c290247c7" options="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" priority="1000000" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="20" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="21" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="22" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" priority="1000000" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="monitor" operation_key="FS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="19" operation="monitor" operation_key="ExportFS_nfs1_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="16" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="20" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 21: 7 actions in 7 synapses
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 21 (ref=pe_calc-dc-1347283645-184) derived from /var/lib/pengine/pe-input-21.bz2
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 21: monitor FS_nfs1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=21:21:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=FS_nfs1_monitor_0
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[24] on FS_nfs1 for client 40197, its parameters: crm_feature_set=[3.0.6] device=[/dev/drive-CSD/nfs1_NFS] directory=[/volumes/nfs1] force_clones=[false] fstype=[xfs] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: info: rsc:FS_nfs1 probe[24] (pid 5904)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: monitor FS_nfs1_monitor_0 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 22: monitor ExportFS_nfs1_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=22:21:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=ExportFS_nfs1_monitor_0
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[25] on ExportFS_nfs1 for client 40197, its parameters: crm_feature_set=[3.0.6] options=[rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1directory=[/volumes/nfs1] fsid=[1955f364-fb4b-11e1-b02e-000c290247c7] clientspec=[*] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: info: rsc:ExportFS_nfs1 probe[25] (pid 5905)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 19: monitor ExportFS_nfs1_monitor_0 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 21 (Complete=0, Pending=4, Fired=4, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-21.bz2): In-progress
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.9 -> 0.17.10 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: WARN: Managed FS_nfs1:monitor process 5904 exited with return code 5.
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: info: operation monitor[24] on FS_nfs1 for client 40197: pid 5904 exited with return code 5
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 21: PEngine Input stored in: /var/lib/pengine/pe-input-21.bz2
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation FS_nfs1_monitor_0 (call=24, rc=5, cib-update=300, confirmed=true) not installed
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'FS_nfs1'
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: WARN: Managed ExportFS_nfs1:monitor process 5905 exited with return code 5.
Sep 10 15:27:25 Cluster-Server-2 lrmd: [40194]: info: operation monitor[25] on ExportFS_nfs1 for client 40197: pid 5905 exited with return code 5
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 5903 exited with return code 0.
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation ExportFS_nfs1_monitor_0 (call=25, rc=5, cib-update=301, confirmed=true) not installed
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'ExportFS_nfs1'
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.10 -> 0.17.11 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 21 (FS_nfs1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 5): Error
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FS_nfs1_last_failure_0, magic=0:5;21:21:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.11) : Event failed
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort priority upgraded from 0 to 1
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: update_abort_priority: Abort action done superceeded by restart
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_monitor_0 (21) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 21 (Complete=1, Pending=3, Fired=0, Skipped=1, Incomplete=2, Source=/var/lib/pengine/pe-input-21.bz2): In-progress
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.11 -> 0.17.12 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: WARN: status_from_rc: Action 22 (ExportFS_nfs1_monitor_0) on Cluster-Server-2 failed (target: 7 vs. rc: 5): Error
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ExportFS_nfs1_last_failure_0, magic=0:5;22:21:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.17.12) : Event failed
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_monitor_0 (22) confirmed on Cluster-Server-2 (rc=4)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 20: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 21 (Complete=2, Pending=2, Fired=1, Skipped=1, Incomplete=1, Source=/var/lib/pengine/pe-input-21.bz2): In-progress
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 21 (Complete=3, Pending=2, Fired=0, Skipped=1, Incomplete=1, Source=/var/lib/pengine/pe-input-21.bz2): In-progress
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.12 -> 0.17.13 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 164 for pingd=100 passed
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.13 -> 0.17.14 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 166 for probe_complete=true passed
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.14 -> 0.17.15 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action ExportFS_nfs1_monitor_0 (19) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 21 (Complete=4, Pending=1, Fired=0, Skipped=1, Incomplete=1, Source=/var/lib/pengine/pe-input-21.bz2): In-progress
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.17.15 -> 0.17.16 (S_TRANSITION_ENGINE)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action FS_nfs1_monitor_0 (18) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 21 (Complete=5, Pending=0, Fired=1, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-21.bz2): In-progress
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 21 (Complete=6, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-21.bz2): Stopped
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 21 is now complete
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 21 status: restart - Event failed
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 302: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=302, ref=pe_calc-dc-1347283645-191, seq=312, quorate=1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: ExportFS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: FS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: FS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: ExportFS_nfs1: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs1	(ocf::nas:Filesystem):	Stopped 
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs1	(ocf::nas:exportfs):	Stopped 
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_color: Resource FS_nfs1 cannot run anywhere
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: native_color: Resource ExportFS_nfs1 cannot run anywhere
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs1	(Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs1	(Stopped)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283645-191" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-22.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="22" />
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 22: 0 actions in 0 synapses
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 22 (ref=pe_calc-dc-1347283645-191) derived from /var/lib/pengine/pe-input-22.bz2
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 22 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-22.bz2): Complete
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 22 is now complete
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 22 status: done - <null>
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=354
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:25 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 22: PEngine Input stored in: /var/lib/pengine/pe-input-22.bz2
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60677] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60677] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60677] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60679] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60679] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60679] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60681] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60681] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60681] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60683] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60683] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60683] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60692] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60692] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60692] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60701] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60701] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60701] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60708] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60708] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60708] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60715] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60715] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60715] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60722] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60722] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60722] is unregistered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [60730] registered
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:60730] disconnected.
Sep 10 15:27:26 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:60730] is unregistered
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 120000us
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 120000 vs 520000 (usec)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 13 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=89
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-11
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-11
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-11
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-11: join_ack_nack
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs2 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 169 for pingd=100 passed
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 171 for probe_complete=true passed
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs2 after complete monitor op (interval=20000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs2 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 60742 exited with return code 0.
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs2 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: info: Managed write_cib_contents process 60755 exited with return code 0.
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi3 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi3 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi3 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-11: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 173 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 175 for probe_complete=true passed
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 177 for pingd=100 passed
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:26 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 180 for pingd=100 passed
Sep 10 15:27:26 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 182 for probe_complete=true passed
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.17.16 -> 0.18.1 (S_IDLE)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.18.1) : Non-status change
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="17" num_updates="16" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="17" num_updates="16" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="NFS_nfs1" __crm_diff_marker__="removed:top" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="FS_nfs1" provider="nas" type="Filesystem" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="FS_nfs1-instance_attributes" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-device" name="device" value="/dev/drive-CSD/nfs1_NFS" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-fstype" name="fstype" value="xfs" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-instance_attributes-force_clones" name="force_clones" value="false" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs1-stop-0" interval="0" name="stop" timeout="60" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="FS_nfs1-monitor-20" interval="20" name="monitor" timeout="40" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="FS_nfs1-meta_attributes" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="FS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="ExportFS_nfs1" provider="nas" type="exportfs" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="ExportFS_nfs1-instance_attributes" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-fsid" name="fsid" value="1955f364-fb4b-11e1-b02e-000c290247c7" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-options" name="options" value="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-instance_attributes-clientspec" name="clientspec" value="*" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs1-start-0" interval="0" name="start" timeout="40" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs1-stop-0" interval="0" name="stop" timeout="10" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="ExportFS_nfs1-monitor-10" interval="10" name="monitor" timeout="20" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="ExportFS_nfs1-meta_attributes" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="ExportFS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="NFS_nfs1-meta_attributes" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="NFS_nfs1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="NFS_nfs1_after_LVM_drive" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="NFS_Server" id="NFS_nfs1_after_NFS_Server" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="NFS_nfs1_with_LVM_drive" rsc="NFS_nfs1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="NFS_nfs1_with_NFS_Server" rsc="NFS_nfs1" score="INFINITY" with-rsc="NFS_Server" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="18" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="crmd" cib-last-written="Mon Sep 10 15:27:25 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 303: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.17.16 -> 0.18.1 from Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="17" num_updates="16" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <resources >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <group id="NFS_nfs1" __crm_diff_marker__="removed:top" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive class="ocf" id="FS_nfs1" provider="nas" type="Filesystem" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <instance_attributes id="FS_nfs1-instance_attributes" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="FS_nfs1-instance_attributes-device" name="device" value="/dev/drive-CSD/nfs1_NFS" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="FS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="FS_nfs1-instance_attributes-fstype" name="fstype" value="xfs" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="FS_nfs1-instance_attributes-force_clones" name="force_clones" value="false" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </instance_attributes>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <operations >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="FS_nfs1-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="FS_nfs1-stop-0" interval="0" name="stop" timeout="60" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="FS_nfs1-monitor-20" interval="20" name="monitor" timeout="40" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </operations>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="FS_nfs1-meta_attributes" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="FS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive class="ocf" id="ExportFS_nfs1" provider="nas" type="exportfs" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <instance_attributes id="ExportFS_nfs1-instance_attributes" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="ExportFS_nfs1-instance_attributes-fsid" name="fsid" value="1955f364-fb4b-11e1-b02e-000c290247c7" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="ExportFS_nfs1-instance_attributes-directory" name="directory" value="/volumes/nfs1" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="ExportFS_nfs1-instance_attributes-options" name="options" value="rw,insecure,async,no_subtree_check,root_squash,no_all_squash,anonuid=1000,anongid=100" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="ExportFS_nfs1-instance_attributes-clientspec" name="clientspec" value="*" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </instance_attributes>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <operations >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="ExportFS_nfs1-start-0" interval="0" name="start" timeout="40" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="ExportFS_nfs1-stop-0" interval="0" name="stop" timeout="10" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 520000us
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="ExportFS_nfs1-monitor-10" interval="10" name="monitor" timeout="20" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </operations>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="ExportFS_nfs1-meta_attributes" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="ExportFS_nfs1-meta_attributes-resource-stickiness" name="resource-stickiness" value="0" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <meta_attributes id="NFS_nfs1-meta_attributes" >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <nvpair id="NFS_nfs1-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </meta_attributes>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </group>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </resources>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <constraints >
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_order first="LVM_drive" id="NFS_nfs1_after_LVM_drive" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 13
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_order first="NFS_Server" id="NFS_nfs1_after_NFS_Server" score="INFINITY" then="NFS_nfs1" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_colocation id="NFS_nfs1_with_LVM_drive" rsc="NFS_nfs1" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_colocation id="NFS_nfs1_with_NFS_Server" rsc="NFS_nfs1" score="INFINITY" with-rsc="NFS_Server" __crm_diff_marker__="removed:top" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </constraints>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=358
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="18" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="crmd" cib-last-written="Mon Sep 10 15:27:25 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.18.1): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 520000us
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 13 (current: 13, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 520000us
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 520000 vs 0  (usec)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 13 (current: 13, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=360
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/304, version=0.18.2): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/306, version=0.18.4): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/307, version=0.18.5): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 168 for master-p_Device_drive:1=10000 passed
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 170 for probe_complete=true passed
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 172 for pingd=100 passed
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/309, version=0.18.9): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-11: Initializing join data (flag=true)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-11: Sending offer to Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-11: Sending offer to Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-11: Waiting on 2 outstanding join acks
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/311, version=0.18.10): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 312 : Parsing CIB options
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-11
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-11
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-11: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283646-195)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-11
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-11: Still waiting on 1 outstanding offers
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: Managed write_cib_contents process 6237 exited with return code 0.
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-11: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283646-40)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-11
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-11: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=364
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finalizing join-11 for 2 clients
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-11: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/314, version=0.18.12): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-11: Still waiting on 2 integrated nodes
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-11 results
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-11: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-11: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-11
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-11: join_ack_nack
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource FS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource ExportFS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource LVM_drive after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/315, version=0.18.13): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-11: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-11: Updating node state to member for Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-11: Registered callback for LRM update 318
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/316, version=0.18.14): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/317, version=0.18.15): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 318 complete
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-11: Still waiting on 1 finalized nodes
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-11: Updating node state to member for Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-11: Registered callback for LRM update 320
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/319, version=0.18.20): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 320 complete
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-11 complete: join_update_complete_callback
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=323)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 324: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.21 -> 0.18.22 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/321, version=0.18.22): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.22 -> 0.18.23 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.23 -> 0.18.24 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/323, version=0.18.24): ok (rc=0)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.24 -> 0.18.25 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 175 for pingd=100 passed
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.25 -> 0.18.26 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="ExportFS_nfs1" type="exportfs" class="ocf" provider="nas" />
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource FS_nfs1 on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="FS_nfs1" type="Filesystem" class="ocf" provider="nas" />
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=324, ref=pe_calc-dc-1347283646-199, seq=312, quorate=1
Sep 10 15:27:26 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 177 for probe_complete=true passed
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 179 for master-p_Device_drive:1=10000 passed
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 181 for probe_complete=true passed
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.26 -> 0.18.27 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.27 -> 0.18.28 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 183 for pingd=100 passed
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.28 -> 0.18.29 (S_POLICY_ENGINE)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi3
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi3
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283646-199" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-23.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="23" />
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 23: 0 actions in 0 synapses
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 23 (ref=pe_calc-dc-1347283646-199) derived from /var/lib/pengine/pe-input-23.bz2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 23 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-23.bz2): Complete
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 23 is now complete
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 23 status: done - <null>
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=374
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:26 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 23: PEngine Input stored in: /var/lib/pengine/pe-input-23.bz2
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.29 -> 0.18.30 (S_IDLE)
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:26 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.18.30 -> 0.18.31 (S_IDLE)
Sep 10 15:27:28 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 6276)
drbd(p_Device_drive:1)[6276]:	2012/09/10_15:27:28 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:28 Cluster-Server-2 crm_attribute: [6306]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:28 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:27:28 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[6276]:	2012/09/10_15:27:28 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[6276]:	2012/09/10_15:27:28 DEBUG: drive: Command output: 
Sep 10 15:27:28 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:27:28 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 6276 exited with return code 0
Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: debug: rsc:FS_nfs2 monitor[43] (pid 60787)
Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: info: operation monitor[43] on FS_nfs2 for client 48715: pid 60787 exited with return code 0
Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 60819)
exportfs(ExportFS_nfs2)[60819]:	2012/09/10_15:27:29 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[60819]:	2012/09/10_15:27:29 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 60819 exited with return code 0
Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 60832)
drbd(p_Device_drive:0)[60832]:	2012/09/10_15:27:29 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:29 Cluster-Server-1 crm_attribute: [60862]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:29 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:27:29 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[60832]:	2012/09/10_15:27:29 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[60832]:	2012/09/10_15:27:29 DEBUG: drive: Command output: 
Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:27:29 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 60832 exited with return code 8
Sep 10 15:27:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 60878)
Sep 10 15:27:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 61028)
Sep 10 15:27:34 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 61029)
SCSTTarget(Target_iscsi3)[61028]:	2012/09/10_15:27:34 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:27:34 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 61028 exited with return code 0
SCSTLun(Lun_iscsi3)[61029]:	2012/09/10_15:27:34 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[61029]:	2012/09/10_15:27:34 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:27:34 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 61029 exited with return code 0
Sep 10 15:27:34 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 6669)
Sep 10 15:27:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 61104)
Sep 10 15:27:35 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 61105)
SCSTTarget(Target_iscsi2)[61104]:	2012/09/10_15:27:35 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:27:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 61104 exited with return code 0
SCSTLun(Lun_iscsi2)[61105]:	2012/09/10_15:27:35 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[61105]:	2012/09/10_15:27:35 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:27:35 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 61105 exited with return code 0
Sep 10 15:27:36 Cluster-Server-1 attrd_updater: [61120]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:36 Cluster-Server-1 attrd_updater: [61120]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:36 Cluster-Server-1 attrd_updater: [61120]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:36 Cluster-Server-1 attrd_updater: [61120]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:36 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:36 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:36 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 60878 exited with return code 0
Sep 10 15:27:36 Cluster-Server-2 attrd_updater: [7040]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:36 Cluster-Server-2 attrd_updater: [7040]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:36 Cluster-Server-2 attrd_updater: [7040]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:36 Cluster-Server-2 attrd_updater: [7040]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:36 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:36 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:36 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 6669 exited with return code 0
Sep 10 15:27:39 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 61121)
exportfs(ExportFS_nfs2)[61121]:	2012/09/10_15:27:39 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[61121]:	2012/09/10_15:27:39 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:27:39 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 61121 exited with return code 0
Sep 10 15:27:39 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 61132)
drbd(p_Device_drive:0)[61132]:	2012/09/10_15:27:39 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:39 Cluster-Server-1 crm_attribute: [61162]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:39 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:27:39 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[61132]:	2012/09/10_15:27:39 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[61132]:	2012/09/10_15:27:39 DEBUG: drive: Command output: 
Sep 10 15:27:39 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:27:39 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 61132 exited with return code 8
Sep 10 15:27:44 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi3 monitor[31] (pid 61471)
Sep 10 15:27:44 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi3 monitor[33] (pid 61472)
SCSTTarget(Target_iscsi3)[61471]:	2012/09/10_15:27:44 DEBUG: Target_iscsi3 monitor : 0
Sep 10 15:27:44 Cluster-Server-1 lrmd: [48712]: info: operation monitor[31] on Target_iscsi3 for client 48715: pid 61471 exited with return code 0
SCSTLun(Lun_iscsi3)[61472]:	2012/09/10_15:27:44 INFO: Lun_iscsi3 monitor : 0
SCSTLun(Lun_iscsi3)[61472]:	2012/09/10_15:27:44 INFO: Lun_iscsi3 monitor : 0
Sep 10 15:27:44 Cluster-Server-1 lrmd: [48712]: info: operation monitor[33] on Lun_iscsi3 for client 48715: pid 61472 exited with return code 0
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 61490)
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 61491)
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 61490 exited with return code 0
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 61491 exited with return code 0
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 61496)
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 61497)
SCSTTarget(Target_iscsi2)[61496]:	2012/09/10_15:27:45 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 61496 exited with return code 0
SCSTLun(Lun_iscsi2)[61497]:	2012/09/10_15:27:45 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[61497]:	2012/09/10_15:27:45 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:27:45 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 61497 exited with return code 0
Sep 10 15:27:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 7640)
Sep 10 15:27:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 7640 exited with return code 0
Sep 10 15:27:45 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:27:45 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 7667)
Sep 10 15:27:45 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 7667 exited with return code 0
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 61560)
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected 91853bf8a4acb53804b862712b09eb6a, calculated b2384384dcdaa2517c6d1fc95672d25a
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.18.31 -> 0.19.1 not applied to 0.18.31: Failed application of an update diff
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.19.1 from Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: debug: Forking temp process write_cib_contents
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 184 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 186 for probe_complete=true passed
Sep 10 15:27:46 Cluster-Server-1 cib: [61587]: ERROR: validate_cib_digest: Digest comparision failed: expected 915815b4e68298e50146ce8cc380b8d9 (/var/lib/heartbeat/crm/cib.4Fs3XB), calculated 0121bf5cffe4dab4f888959f1baf7322
Sep 10 15:27:46 Cluster-Server-1 cib: [61587]: ERROR: retrieveCib: Checksum of /var/lib/heartbeat/crm/cib.CcXphf failed!  Configuration contents ignored!
Sep 10 15:27:46 Cluster-Server-1 cib: [61587]: ERROR: retrieveCib: Usually this is caused by manual changes, please refer to http://clusterlabs.org/wiki/FAQ#cib_changes_detected
Sep 10 15:27:46 Cluster-Server-1 cib: [61587]: ERROR: crm_abort: write_cib_contents: Triggered fatal assert at io.c:662 : retrieveCib(tmp1, tmp2, FALSE) != NULL
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Cancelling op 33 for Lun_iscsi3 (Lun_iscsi3:33)
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: cancel_op: operation monitor[33] on Lun_iscsi3 for client 48715, its parameters: handler=[vdisk_blockio] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3] path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] CRM_meta_interval=[10000] lun=[0] device_name=[iscsi3]  cancelled
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: debug: on_msg_cancel_op: operation 33 cancelled
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Op 33 for Lun_iscsi3 (Lun_iscsi3:33): cancelled
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=85:24:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_stop_0
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation stop[50] on Lun_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[stop] CRM_meta_timeout=[240000]  to the operation list.
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi3 stop[50] (pid 61589)
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi3_monitor_10000 (call=33, status=1, cib-update=0, confirmed=true) Cancelled
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi3'
Sep 10 15:27:46 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 188 for pingd=100 passed
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: WARN: Managed write_cib_contents process 61587 killed by signal 6 [SIGABRT - Abort].
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: ERROR: Managed write_cib_contents process 61587 dumped core
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: ERROR: cib_diskwrite_complete: Disk write failed: status=134, signo=6, exitcode=0
Sep 10 15:27:46 Cluster-Server-1 cib: [48709]: ERROR: cib_diskwrite_complete: Disabling disk writes after write failure
SCSTLun(Lun_iscsi3)[61589]:	2012/09/10_15:27:46 INFO: Stopping lun 0 on target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[61589]:	2012/09/10_15:27:46 INFO: Disabling target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[61589]:	2012/09/10_15:27:46 INFO: Removing LUN 0, device iscsi3, target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[61589]:	2012/09/10_15:27:46 INFO: Closing device iscsi3
SCSTLun(Lun_iscsi3)[61589]:	2012/09/10_15:27:46 INFO: Enabling target iqn.2005-07.com.example:vdisk.iscsi3
SCSTLun(Lun_iscsi3)[61589]:	2012/09/10_15:27:46 INFO: Lun_iscsi3 stop : 0
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: Managed Lun_iscsi3:stop process 61589 exited with return code 0.
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: operation stop[50] on Lun_iscsi3 for client 48715: pid 61589 exited with return code 0
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi3 after complete stop op (interval=0)
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi3_stop_0 (call=50, rc=0, cib-update=104, confirmed=true) ok
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending stop op to history for 'Lun_iscsi3'
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Cancelling op 31 for Target_iscsi3 (Target_iscsi3:31)
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: cancel_op: operation monitor[31] on Target_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[monitor] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[60000] CRM_meta_interval=[10000]  cancelled
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: debug: on_msg_cancel_op: operation 31 cancelled
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: cancel_op: Op 31 for Target_iscsi3 (Target_iscsi3:31): cancelled
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=84:24:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_stop_0
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation stop[51] on Target_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] CRM_meta_name=[stop] CRM_meta_timeout=[240000]  to the operation list.
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi3 stop[51] (pid 61627)
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi3_monitor_10000 (call=31, status=1, cib-update=0, confirmed=true) Cancelled
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi3'
SCSTTarget(Target_iscsi3)[61627]:	2012/09/10_15:27:46 INFO: target iqn.2005-07.com.example:vdisk.iscsi3: Stopping...
SCSTTarget(Target_iscsi3)[61627]:	2012/09/10_15:27:46 INFO: disabling target iqn.2005-07.com.example:vdisk.iscsi3
SCSTTarget(Target_iscsi3)[61627]:	2012/09/10_15:27:46 INFO: deleting target iqn.2005-07.com.example:vdisk.iscsi3
SCSTTarget(Target_iscsi3)[61627]:	2012/09/10_15:27:46 INFO: target iqn.2005-07.com.example:vdisk.iscsi3: Stopped.
SCSTTarget(Target_iscsi3)[61627]:	2012/09/10_15:27:46 DEBUG: Target_iscsi3 stop : 0
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: Managed Target_iscsi3:stop process 61627 exited with return code 0.
Sep 10 15:27:46 Cluster-Server-1 lrmd: [48712]: info: operation stop[51] on Target_iscsi3 for client 48715: pid 61627 exited with return code 0
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi3 after complete stop op (interval=0)
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi3_stop_0 (call=51, rc=0, cib-update=105, confirmed=true) ok
Sep 10 15:27:46 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending stop op to history for 'Target_iscsi3'
Sep 10 15:27:46 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 7856)
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: debug: activateCibXml: Triggering CIB write for cib_replace op
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.18.31 -> 0.19.1 (S_IDLE)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.19.1) : Non-status change
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="18" num_updates="31" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="18" num_updates="31" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive id="Target_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi3-meta_attributes" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi3-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive id="Lun_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi3-meta_attributes" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="19" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:26 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="iSCSI_iscsi3-meta_attributes" __crm_diff_marker__="added:top" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_iscsi3-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 325: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="18" num_updates="31" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <resources >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <group id="iSCSI_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive id="Target_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Target_iscsi3-meta_attributes" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Target_iscsi3-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive id="Lun_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Lun_iscsi3-meta_attributes" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi3-meta_attributes-target-role" name="target-role" value="Started" __crm_diff_marker__="removed:top" />
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </group>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </resources>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="19" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:26 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <resources >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <group id="iSCSI_iscsi3" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <meta_attributes id="iSCSI_iscsi3-meta_attributes" __crm_diff_marker__="added:top" >
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +           <nvpair id="iSCSI_iscsi3-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +         </meta_attributes>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </group>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </resources>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=Cluster-Server-1/cibadmin/2, version=0.19.1): ok (rc=0)
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: debug: Forking temp process write_cib_contents
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.19.1): ok (rc=0)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=325, ref=pe_calc-dc-1347283666-200, seq=312, quorate=1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="ExportFS_nfs1" type="exportfs" class="ocf" provider="nas" />
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource FS_nfs1 on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="FS_nfs1" type="Filesystem" class="ocf" provider="nas" />
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.1 -> 0.19.2 (S_POLICY_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi3
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi3 cannot run anywhere
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi3
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi3 cannot run anywhere
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.2 -> 0.19.3 (S_POLICY_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: LogActions: Stop    Target_iscsi3	(Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: LogActions: Stop    Lun_iscsi3	(Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.3 -> 0.19.4 (S_POLICY_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 cib: [7872]: ERROR: validate_cib_digest: Digest comparision failed: expected 915815b4e68298e50146ce8cc380b8d9 (/var/lib/heartbeat/crm/cib.L6EHMo), calculated 0121bf5cffe4dab4f888959f1baf7322
Sep 10 15:27:46 Cluster-Server-2 cib: [7872]: ERROR: retrieveCib: Checksum of /var/lib/heartbeat/crm/cib.nb01wl failed!  Configuration contents ignored!
Sep 10 15:27:46 Cluster-Server-2 cib: [7872]: ERROR: retrieveCib: Usually this is caused by manual changes, please refer to http://clusterlabs.org/wiki/FAQ#cib_changes_detected
Sep 10 15:27:46 Cluster-Server-2 cib: [7872]: ERROR: crm_abort: write_cib_contents: Triggered fatal assert at io.c:662 : retrieveCib(tmp1, tmp2, FALSE) != NULL
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:46 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:46 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.4 -> 0.19.5 (S_POLICY_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283666-200" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-24.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="24" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="89" operation="stopped" operation_key="iSCSI_iscsi3_stopped_0" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="stop" operation_key="Target_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="85" operation="stop" operation_key="Lun_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="stop" operation_key="iSCSI_iscsi3_stop_0" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="88" operation="stop" operation_key="iSCSI_iscsi3_stop_0" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_timeout="20000" crm_feature_set="3.0.6" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="84" operation="stop" operation_key="Target_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="stop" CRM_meta_timeout="240000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="85" operation="stop" operation_key="Lun_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="stop" operation_key="iSCSI_iscsi3_stop_0" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="85" operation="stop" operation_key="Lun_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_name="stop" CRM_meta_timeout="240000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <pseudo_event id="88" operation="stop" operation_key="iSCSI_iscsi3_stop_0" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="15" operation="all_stopped" operation_key="all_stopped" >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="84" operation="stop" operation_key="Target_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="85" operation="stop" operation_key="Lun_iscsi3_stop_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 24: 5 actions in 5 synapses
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 24 (ref=pe_calc-dc-1347283666-200) derived from /var/lib/pengine/pe-input-24.bz2
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 88 fired and confirmed
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 85: stop Lun_iscsi3_stop_0 on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 24 (Complete=0, Pending=1, Fired=2, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-24.bz2): In-progress
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 24 (Complete=1, Pending=1, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-24.bz2): In-progress
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:46 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:46 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 186 for pingd=100 passed
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.5 -> 0.19.6 (S_TRANSITION_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 188 for probe_complete=true passed
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.6 -> 0.19.7 (S_TRANSITION_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_stop_0 (85) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 84: stop Target_iscsi3_stop_0 on Cluster-Server-1
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 24 (Complete=2, Pending=1, Fired=1, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-24.bz2): In-progress
Sep 10 15:27:46 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 24: PEngine Input stored in: /var/lib/pengine/pe-input-24.bz2
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: WARN: Managed write_cib_contents process 7872 killed by signal 6 [SIGABRT - Abort].
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: ERROR: Managed write_cib_contents process 7872 dumped core
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: ERROR: cib_diskwrite_complete: Disk write failed: status=134, signo=6, exitcode=0
Sep 10 15:27:46 Cluster-Server-2 cib: [40192]: ERROR: cib_diskwrite_complete: Disabling disk writes after write failure
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.7 -> 0.19.8 (S_TRANSITION_ENGINE)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_stop_0 (84) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 89 fired and confirmed
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 15 fired and confirmed
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 24 (Complete=3, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-24.bz2): In-progress
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 24 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-24.bz2): Complete
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 24 is now complete
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 24 status: done - <null>
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=378
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:46 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-1 attrd_updater: [61666]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:48 Cluster-Server-1 attrd_updater: [61666]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:48 Cluster-Server-1 attrd_updater: [61666]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:48 Cluster-Server-1 attrd_updater: [61666]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 61560 exited with return code 0
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_resource: fail-count-Target_iscsi3=<null>
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for fail-count-Target_iscsi3
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_resource: fail-count-Lun_iscsi3=<null>
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: info: find_hash_entry: Creating hash entry for fail-count-Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected 9b4ba835a224eb572f4b072ce66576c0, calculated 149597b6959d5b40e81a6b0cfd00bd4c
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.19.8 -> 0.19.9 not applied to 0.19.8: Failed application of an update diff
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: info: delete_resource: Removing resource Target_iscsi3 for 61717_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: lrmd_rsc_destroy: removing resource Target_iscsi3
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: delete_rsc_entry: sync: Sending delete op for Target_iscsi3
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: info: notify_deleted: Notifying 61717_crm_resource on Cluster-Server-1 that Target_iscsi3 was deleted
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: WARN: decode_transition_key: Bad UUID (crm-resource-61717) in sscanf result (3) for 0:0:crm-resource-61717
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: send_direct_ack: Updating resouce Target_iscsi3 after complete delete op (interval=60000)
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: send_direct_ack: ACK'ing resource op Target_iscsi3_delete_60000 from 0:0:crm-resource-61717: lrm_invoke-lrmd-1347283668-43
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: notify_deleted: Triggering a refresh after 61717_crm_resource deleted Target_iscsi3 from the LRM
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283645" />
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: info: apply_xml_diff: Digest mis-match: expected e3dfc19c398b1993cd25df90233a259e, calculated 1bc6a11a9f3debfd056f550f494795ad
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: notice: cib_process_diff: Diff 0.20.1 -> 0.20.2 not applied to 0.20.1: Failed application of an update diff
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: info: cib_server_process_diff: Requesting re-sync from peer
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: info: delete_resource: Removing resource Lun_iscsi3 for 61717_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.20.1 -> 0.20.2 (sync in progress)
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: lrmd_rsc_destroy: removing resource Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: delete_rsc_entry: sync: Sending delete op for Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: info: notify_deleted: Notifying 61717_crm_resource on Cluster-Server-1 that Lun_iscsi3 was deleted
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: WARN: decode_transition_key: Bad UUID (crm-resource-61717) in sscanf result (3) for 0:0:crm-resource-61717
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: send_direct_ack: Updating resouce Lun_iscsi3 after complete delete op (interval=60000)
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: send_direct_ack: ACK'ing resource op Lun_iscsi3_delete_60000 from 0:0:crm-resource-61717: lrm_invoke-lrmd-1347283668-44
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: notify_deleted: Triggering a refresh after 61717_crm_resource deleted Lun_iscsi3 from the LRM
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283668" />
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.20.1 -> 0.20.2 (sync in progress)
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.20.2 -> 0.20.3 (sync in progress)
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: notice: cib_server_process_diff: Not applying diff 0.20.3 -> 0.20.4 (sync in progress)
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.20.3 from Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:48 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 190 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 192 for probe_complete=true passed
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Target_iscsi3
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=16:26:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_monitor_0
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 194 for pingd=100 passed
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi3
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[52] on Target_iscsi3 for client 48715, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: info: rsc:Target_iscsi3 probe[52] (pid 61725)
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: on_msg_add_rsc:client [48715] adds resource Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: do_lrm_rsc_op: Performing key=17:26:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_monitor_0
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: debug: on_msg_perform_op: add an operation operation monitor[53] on Lun_iscsi3 for client 48715, its parameters: path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi3] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3]  to the operation list.
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: info: rsc:Lun_iscsi3 probe[53] (pid 61726)
SCSTLun(Lun_iscsi3)[61726]:	2012/09/10_15:27:48 INFO: Lun_iscsi3 monitor : 7
SCSTTarget(Target_iscsi3)[61725]:	2012/09/10_15:27:48 DEBUG: Target_iscsi3 monitor : 7
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: WARN: Managed Target_iscsi3:monitor process 61725 exited with return code 7.
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[52] on Target_iscsi3 for client 48715: pid 61725 exited with return code 7
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Target_iscsi3_monitor_0 (call=52, rc=7, cib-update=114, confirmed=true) not running
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi3'
SCSTLun(Lun_iscsi3)[61726]:	2012/09/10_15:27:48 INFO: Lun_iscsi3 monitor : 7
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: WARN: Managed Lun_iscsi3:monitor process 61726 exited with return code 7.
Sep 10 15:27:48 Cluster-Server-1 lrmd: [48712]: info: operation monitor[53] on Lun_iscsi3 for client 48715: pid 61726 exited with return code 7
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: do_update_resource: Updating resource Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: info: process_lrm_event: LRM operation Lun_iscsi3_monitor_0 (call=53, rc=7, cib-update=115, confirmed=true) not running
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi3'
Sep 10 15:27:48 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:27:48 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:27:48 Cluster-Server-2 attrd_updater: [7930]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:48 Cluster-Server-2 attrd_updater: [7930]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:48 Cluster-Server-2 attrd_updater: [7930]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:27:48 Cluster-Server-2 attrd_updater: [7930]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 7856 exited with return code 0
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 7931)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi3'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[13])
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: notice: attrd_ais_dispatch: Update relayed from Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from Cluster-Server-1: fail-count-Target_iscsi3=<null>
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for fail-count-Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: notice: attrd_ais_dispatch: Update relayed from Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from Cluster-Server-1: fail-count-Lun_iscsi3=<null>
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: info: find_hash_entry: Creating hash entry for fail-count-Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.19.8 -> 0.19.9 (S_IDLE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi3_last_0'] (Target_iscsi3_last_0 on Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi3_last_0, magic=0:0;84:24:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.19.9) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 326: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi3'] (origin=Cluster-Server-1/crmd/106, version=0.19.8): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=326, ref=pe_calc-dc-1347283668-203, seq=312, quorate=1
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.19.8): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="ExportFS_nfs1" type="exportfs" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource FS_nfs1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="FS_nfs1" type="Filesystem" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi3'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[13])
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Target_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_lrm_rsc_state: Lun_iscsi3: Overwriting calculated next role Unknown with requested next role Stopped
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Target_iscsi3'] (origin=Cluster-Server-1/crmd/107, version=0.19.9): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi3 cannot run anywhere
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi3 cannot run anywhere
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="19" num_updates="9" >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <crm_config >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <nvpair value="1347283645" id="cib-bootstrap-options-last-lrm-refresh" />
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </cluster_property_set>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </crm_config>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="20" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:46 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +   <configuration >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +     <crm_config >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +         <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283668" />
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +       </cluster_property_set>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +     </crm_config>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: +   </configuration>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib:diff: + </cib>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=Cluster-Server-1/crmd/109, version=0.20.1): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi3'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[14])
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi3'] (origin=Cluster-Server-1/crmd/110, version=0.20.1): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi3'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[1])
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi3'] (origin=local/crmd/327, version=0.20.1): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: delete_resource: Removing resource Target_iscsi3 for 61717_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: lrmd_rsc_destroy: removing resource Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: delete_rsc_entry: sync: Sending delete op for Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: notify_deleted: Notifying 61717_crm_resource on Cluster-Server-1 that Target_iscsi3 was deleted
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: WARN: decode_transition_key: Bad UUID (crm-resource-61717) in sscanf result (3) for 0:0:crm-resource-61717
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: send_direct_ack: Updating resource Target_iscsi3 after complete delete op (interval=60000)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: send_direct_ack: ACK'ing resource op Target_iscsi3_delete_60000 from 0:0:crm-resource-61717: lrm_invoke-lrmd-1347283668-204
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: notify_deleted: Triggering a refresh after 61717_crm_resource deleted Target_iscsi3 from the LRM
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi3'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[1])
drbd(p_Device_drive:1)[7931]:	2012/09/10_15:27:48 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Target_iscsi3'] (origin=local/crmd/328, version=0.20.2): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283668" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 25: PEngine Input stored in: /var/lib/pengine/pe-input-25.bz2
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/330, version=0.20.3): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi3'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[14])
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi3'] (origin=local/crmd/331, version=0.20.3): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: delete_resource: Removing resource Lun_iscsi3 for 61717_crm_resource (internal) on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: lrmd_rsc_destroy: removing resource Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: delete_rsc_entry: sync: Sending delete op for Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: notify_deleted: Notifying 61717_crm_resource on Cluster-Server-1 that Lun_iscsi3 was deleted
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: WARN: decode_transition_key: Bad UUID (crm-resource-61717) in sscanf result (3) for 0:0:crm-resource-61717
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: send_direct_ack: Updating resouce Lun_iscsi3 after complete delete op (interval=60000)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: send_direct_ack: ACK'ing resource op Lun_iscsi3_delete_60000 from 0:0:crm-resource-61717: lrm_invoke-lrmd-1347283668-205
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: notify_deleted: Triggering a refresh after 61717_crm_resource deleted Lun_iscsi3 from the LRM
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=Cluster-Server-1/Cluster-Server-1/(null), version=0.20.3): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi3'] (/cib/status/node_state[1]/lrm/lrm_resources/lrm_resource[14])
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']//lrm_resource[@id='Lun_iscsi3'] (origin=Cluster-Server-1/crmd/111, version=0.20.4): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=Cluster-Server-1/crmd/113, version=0.20.5): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi3'] (/cib/status/node_state[2]/lrm/lrm_resources/lrm_resource[14])
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']//lrm_resource[@id='Lun_iscsi3'] (origin=local/crmd/332, version=0.20.6): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[8])
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283668" />
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.19.8 -> 0.19.9 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi3_last_0'] (Target_iscsi3_last_0 on Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi3_last_0, magic=0:0;84:24:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.19.9) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.19.9 -> 0.20.1 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.20.1) : Non-status change
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="19" num_updates="9" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="19" num_updates="9" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <crm_config >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair value="1347283645" id="cib-bootstrap-options-last-lrm-refresh" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </cluster_property_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </crm_config>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="20" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="cibadmin" cib-last-written="Mon Sep 10 15:27:46 2012" have-quorum="1" dc-uuid="Cluster-Server-2" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <crm_config >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <cluster_property_set id="cib-bootstrap-options" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1347283668" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </cluster_property_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </crm_config>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.20.1 -> 0.20.2 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi3_last_0'] (Lun_iscsi3_last_0 on Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi3_last_0, magic=0:0;85:24:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.20.2) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/334, version=0.20.7): ok (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.20.1 -> 0.20.2 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi3_last_0'] (Target_iscsi3_last_0 on Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi3_last_0, magic=0:7;17:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.20.2) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.20.1 -> 0.20.2 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Target_iscsi3_last_0'] (Target_iscsi3_last_0 on Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Target_iscsi3_last_0, magic=0:7;17:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.20.2) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.2 -> 0.20.3 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.20.3 -> 0.20.4 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:48 Cluster-Server-2 crm_attribute: [7961]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi3_last_0'] (Lun_iscsi3_last_0 on Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi3_last_0, magic=0:7;18:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.20.4) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.20.3 -> 0.20.4 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi3_last_0'] (Lun_iscsi3_last_0 on Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi3_last_0, magic=0:0;85:24:0:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.20.4) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.4 -> 0.20.5 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_delete): 0.20.5 -> 0.20.6 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='Lun_iscsi3_last_0'] (Lun_iscsi3_last_0 on Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=Lun_iscsi3_last_0, magic=0:7;18:14:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66, cib=0.20.6) : Resource op removal
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.6 -> 0.20.7 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: handle_response: pe_calc calculation pe_calc-dc-1347283668-203 is obsolete
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 335: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 336: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 337: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 338: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 339: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 340: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 341: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 342: Requesting the current CIB: S_POLICY_ENGINE
drbd(p_Device_drive:1)[7931]:	2012/09/10_15:27:48 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[7931]:	2012/09/10_15:27:48 DEBUG: drive: Command output: 
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 7931 exited with return code 0
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=342, ref=pe_calc-dc-1347283668-206, seq=312, quorate=1
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 343 : Parsing CIB options
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.7 -> 0.20.8 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="ExportFS_nfs1" type="exportfs" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource FS_nfs1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="FS_nfs1" type="Filesystem" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi3
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi3	(ocf::nas:SCSTTarget):	Stopped 
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi3	(ocf::nas:SCSTLun):	Stopped 
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.8 -> 0.20.9 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Target_iscsi3 cannot run anywhere
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: native_color: Resource Lun_iscsi3 cannot run anywhere
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi3 on Cluster-Server-1 (Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi3 on Cluster-Server-1 (Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Target_iscsi3 on Cluster-Server-2 (Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: native_create_probe: Probing Lun_iscsi3 on Cluster-Server-2 (Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi3	(Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi3	(Stopped)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.9 -> 0.20.10 (S_POLICY_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283668-206" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-26.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="26" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="0" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="19" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="1" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="16" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Target_iscsi3" long-id="iSCSI_iscsi3:Target_iscsi3" class="ocf" provider="nas" type="SCSTTarget" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="2" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="20" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="3" >
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="17" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <primitive id="Lun_iscsi3" long-id="iSCSI_iscsi3:Lun_iscsi3" class="ocf" provider="nas" type="SCSTLun" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" crm_feature_set="3.0.6" device_name="iscsi3" handler="vdisk_blockio" lun="0" path="/dev/drive-CSD/iscsi3_iSCSI" target_iqn="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="4" priority="1000000" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="18" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="19" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="20" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="5" priority="1000000" >
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <rsc_op id="15" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes CRM_meta_op_no_wait="true" crm_feature_set="3.0.6" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </rsc_op>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="16" operation="monitor" operation_key="Target_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="17" operation="monitor" operation_key="Lun_iscsi3_monitor_0" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       <synapse id="6" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <action_set >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <pseudo_event id="14" operation="probe_complete" operation_key="probe_complete" >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <attributes crm_feature_set="3.0.6" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </pseudo_event>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </action_set>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         <inputs >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="15" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-1" on_node_uuid="Cluster-Server-1" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           <trigger >
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log             <rsc_op id="18" operation="probe_complete" operation_key="probe_complete" on_node="Cluster-Server-2" on_node_uuid="Cluster-Server-2" />
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log           </trigger>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log         </inputs>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log       </synapse>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     </transition_graph>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 26: 7 actions in 7 synapses
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 26 (ref=pe_calc-dc-1347283668-206) derived from /var/lib/pengine/pe-input-26.bz2
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 19: monitor Target_iscsi3_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=19:26:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Target_iscsi3_monitor_0
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Target_iscsi3
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[26] on Target_iscsi3 for client 40197, its parameters: crm_feature_set=[3.0.6] iqn=[iqn.2005-07.com.example:vdisk.iscsi3] CRM_meta_timeout=[20000]  to the operation list.
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: info: rsc:Target_iscsi3 probe[26] (pid 7968)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 16: monitor Target_iscsi3_monitor_0 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 20: monitor Lun_iscsi3_monitor_0 on Cluster-Server-2 (local)
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: on_msg_add_rsc:client [40197] adds resource Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_lrm_rsc_op: Performing key=20:26:7:81b7c738-e2a4-46c6-91bd-4df2c9c62d66 op=Lun_iscsi3_monitor_0
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op:2399: copying parameters for rsc Lun_iscsi3
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: debug: on_msg_perform_op: add an operation operation monitor[27] on Lun_iscsi3 for client 40197, its parameters: path=[/dev/drive-CSD/iscsi3_iSCSI] crm_feature_set=[3.0.6] lun=[0] handler=[vdisk_blockio] device_name=[iscsi3] CRM_meta_timeout=[20000] target_iqn=[iqn.2005-07.com.example:vdisk.iscsi3]  to the operation list.
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: info: rsc:Lun_iscsi3 probe[27] (pid 7969)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 17: monitor Lun_iscsi3_monitor_0 on Cluster-Server-1
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 26 (Complete=0, Pending=4, Fired=4, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-26.bz2): In-progress
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.10 -> 0.20.11 (S_TRANSITION_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 191 for pingd=100 passed
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.11 -> 0.20.12 (S_TRANSITION_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 193 for probe_complete=true passed
Sep 10 15:27:48 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 26: PEngine Input stored in: /var/lib/pengine/pe-input-26.bz2
SCSTTarget(Target_iscsi3)[7968]:	2012/09/10_15:27:48 DEBUG: Target_iscsi3 monitor : 7
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: WARN: Managed Target_iscsi3:monitor process 7968 exited with return code 7.
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[26] on Target_iscsi3 for client 40197: pid 7968 exited with return code 7
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Target_iscsi3_monitor_0 (call=26, rc=7, cib-update=344, confirmed=true) not running
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Target_iscsi3'
SCSTLun(Lun_iscsi3)[7969]:	2012/09/10_15:27:48 INFO: Lun_iscsi3 monitor : 7
SCSTLun(Lun_iscsi3)[7969]:	2012/09/10_15:27:48 INFO: Lun_iscsi3 monitor : 7
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: WARN: Managed Lun_iscsi3:monitor process 7969 exited with return code 7.
Sep 10 15:27:48 Cluster-Server-2 lrmd: [40194]: info: operation monitor[27] on Lun_iscsi3 for client 40197: pid 7969 exited with return code 7
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: do_update_resource: Updating resouce Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: process_lrm_event: LRM operation Lun_iscsi3_monitor_0 (call=27, rc=7, cib-update=345, confirmed=true) not running
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: update_history_cache: Appending monitor op to history for 'Lun_iscsi3'
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.12 -> 0.20.13 (S_TRANSITION_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_monitor_0 (19) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 26 (Complete=1, Pending=3, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-input-26.bz2): In-progress
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.13 -> 0.20.14 (S_TRANSITION_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_monitor_0 (20) confirmed on Cluster-Server-2 (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 18: probe_complete probe_complete on Cluster-Server-2 (local) - no waiting
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: probe_complete=true for localhost
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 26 (Complete=2, Pending=2, Fired=1, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-26.bz2): In-progress
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 26 (Complete=3, Pending=2, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-26.bz2): In-progress
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crmd: probe_complete=true
Sep 10 15:27:48 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: true, Current: true, Stored: true
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.14 -> 0.20.15 (S_TRANSITION_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Target_iscsi3_monitor_0 (16) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 26 (Complete=4, Pending=1, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-input-26.bz2): In-progress
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.20.15 -> 0.20.16 (S_TRANSITION_ENGINE)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: match_graph_event: Action Lun_iscsi3_monitor_0 (17) confirmed on Cluster-Server-1 (rc=0)
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: info: te_rsc_command: Initiating action 15: probe_complete probe_complete on Cluster-Server-1 - no waiting
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_pseudo_action: Pseudo action 14 fired and confirmed
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: run_graph: ==== Transition 26 (Complete=5, Pending=0, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-26.bz2): In-progress
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 26 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-26.bz2): Complete
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 26 is now complete
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 26 status: done - <null>
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=395
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:48 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: rsc:FS_nfs2 monitor[43] (pid 61773)
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: info: operation monitor[43] on FS_nfs2 for client 48715: pid 61773 exited with return code 0
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 61809)
exportfs(ExportFS_nfs2)[61809]:	2012/09/10_15:27:49 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 61809 exited with return code 0
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61830] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61830] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61830] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61832] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61832] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61832] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61834] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61834] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61834] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61836] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61836] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61836] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61845] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61845] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61845] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61854] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61854] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61854] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61861] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61861] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61861] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61868] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61868] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61868] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61875] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61875] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61875] is unregistered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_msg_register:client lrmadmin [61883] registered
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: on_receive_cmd: the IPC to client [pid:61883] disconnected.
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: unregister_client: client lrmadmin [pid:61883] is unregistered
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: crm_uptime: Current CPU usage is: 0s, 140000us
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: crm_compare_age: Loose: 140000 vs 590000 (usec)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: info: do_election_count_vote: Election 14 (owner: Cluster-Server-2) lost: vote from Cluster-Server-2 (Uptime)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_election_check: Ignore election check: we not in an election
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 61892)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_START
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=96
drbd(p_Device_drive:0)[61892]:	2012/09/10_15:27:49 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_OFFER: join-12
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Respond to join offer join-12
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: handle_request: Raising I_JOIN_RESULT: join-12
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: Confirming join join-12: join_ack_nack
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs2 after complete start op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 197 for pingd=100 passed
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 199 for probe_complete=true passed
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs2 after complete monitor op (interval=20000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs2 after complete start op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:49 Cluster-Server-1 crm_attribute: [61925]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: xmlfromIPC: Peer disconnected
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce ExportFS_nfs2 after complete monitor op (interval=10000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete start op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_NFS_Server:0 after complete monitor op (interval=30000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce LVM_drive after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_PingD:0 after complete monitor op (interval=10000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete start op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_Device_drive:0 after complete monitor op (interval=10000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce p_iSCSI_Daemon:0 after complete monitor op (interval=30000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete start op (interval=0)
drbd(p_Device_drive:0)[61892]:	2012/09/10_15:27:49 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[61892]:	2012/09/10_15:27:49 DEBUG: drive: Command output: 
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 61892 exited with return code 8
Sep 10 15:27:49 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi2 after complete monitor op (interval=10000)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: create_operation_update: build_active_RAs: Updating resouce Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_cl_join_finalize_respond: join-12: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 crmd: [48715]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:0 (10000)
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-master-p_Device_drive.0" name="master-p_Device_drive:0" value="10000" />
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 201 for master-p_Device_drive:0=10000 passed
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 203 for probe_complete=true passed
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 205 for pingd=100 passed
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] does not exist
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:1=(null) passed
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-pingd" name="pingd" value="100" />
Sep 10 15:27:49 Cluster-Server-1 cib: [48709]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-1']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-1-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 208 for pingd=100 passed
Sep 10 15:27:49 Cluster-Server-1 attrd: [48713]: debug: attrd_cib_callback: Update 210 for probe_complete=true passed
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_replace): 0.20.16 -> 0.21.1 (S_IDLE)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.21.1) : Non-status change
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause <diff crm_feature_set="3.0.6" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-removed admin_epoch="0" epoch="20" num_updates="16" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib admin_epoch="0" epoch="20" num_updates="16" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       <configuration >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <resources >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <group id="iSCSI_iscsi3" __crm_diff_marker__="removed:top" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Target_iscsi3" provider="nas" type="SCSTTarget" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Target_iscsi3-instance_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Target_iscsi3-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi3-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi3-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_replace_notify: Replaced: 0.20.16 -> 0.21.1 from Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Target_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Target_iscsi3-meta_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <primitive class="ocf" id="Lun_iscsi3" provider="nas" type="SCSTLun" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <instance_attributes id="Lun_iscsi3-instance_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi3_iSCSI" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-device_name" name="device_name" value="iscsi3" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <nvpair id="Lun_iscsi3-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </instance_attributes>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <operations >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi3-monitor-10" interval="10" name="monitor" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi3-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: info: do_cib_replaced: Sending full refresh
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause                 <op id="Lun_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </operations>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <meta_attributes id="Lun_iscsi3-meta_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               </meta_attributes>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </primitive>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             <meta_attributes id="iSCSI_iscsi3-meta_attributes" >
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: - <cib admin_epoch="0" epoch="20" num_updates="16" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause               <nvpair id="iSCSI_iscsi3-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause             </meta_attributes>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -   <configuration >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           </group>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </resources>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <resources >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         <constraints >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="LVM_drive" id="iSCSI_iscsi3_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi3_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <group id="iSCSI_iscsi3" __crm_diff_marker__="removed:top" >
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive class="ocf" id="Target_iscsi3" provider="nas" type="SCSTTarget" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi3_with_LVM_drive" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <instance_attributes id="Target_iscsi3-instance_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause           <rsc_colocation id="iSCSI_iscsi3_with_iSCSI_Daemon" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Target_iscsi3-instance_attributes-iqn" name="iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause         </constraints>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause       </configuration>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     </cib>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-removed>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   <diff-added >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause     <cib epoch="21" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="crmd" cib-last-written="Mon Sep 10 15:27:48 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause   </diff-added>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: abort_transition_graph: Cause </diff>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </instance_attributes>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <operations >
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Target_iscsi3-monitor-10" interval="10" name="monitor" timeout="60" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Target_iscsi3-start-0" interval="0" name="start" timeout="240" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_cib_replaced: Updating the CIB after a replace: DC=true
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Target_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </operations>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Target_iscsi3-meta_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <primitive class="ocf" id="Lun_iscsi3" provider="nas" type="SCSTLun" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <instance_attributes id="Lun_iscsi3-instance_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi3-instance_attributes-target_iqn" name="target_iqn" value="iqn.2005-07.com.example:vdisk.iscsi3" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi3-instance_attributes-lun" name="lun" value="0" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi3-instance_attributes-path" name="path" value="/dev/drive-CSD/iscsi3_iSCSI" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 348: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi3-instance_attributes-device_name" name="device_name" value="iscsi3" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <nvpair id="Lun_iscsi3-instance_attributes-handler" name="handler" value="vdisk_blockio" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </instance_attributes>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: update_dc: Unset DC. Was Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <operations >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Lun_iscsi3-monitor-10" interval="10" name="monitor" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_VOTE
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Lun_iscsi3-start-0" interval="0" name="start" timeout="60" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 590000us
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_election_vote: Started election 14
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=399
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -             <op id="Lun_iscsi3-stop-0" interval="0" name="stop" timeout="240" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </operations>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <meta_attributes id="Lun_iscsi3-meta_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Created voted hash
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           </meta_attributes>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 590000us
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 14 (current: 14, owner: Cluster-Server-2): Processed vote from Cluster-Server-2 (Recorded)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </primitive>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -         <meta_attributes id="iSCSI_iscsi3-meta_attributes" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Still waiting on 1 non-votes (2 total)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -           <nvpair id="iSCSI_iscsi3-meta_attributes-target-role" name="target-role" value="Stopped" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -         </meta_attributes>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -       </group>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </resources>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -     <constraints >
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_order first="LVM_drive" id="iSCSI_iscsi3_after_LVM_drive" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_order first="iSCSI_Daemon" id="iSCSI_iscsi3_after_iSCSI_Daemon" score="INFINITY" then="iSCSI_iscsi3" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_colocation id="iSCSI_iscsi3_with_LVM_drive" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="LVM_drive" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -       <rsc_colocation id="iSCSI_iscsi3_with_iSCSI_Daemon" rsc="iSCSI_iscsi3" score="INFINITY" with-rsc="iSCSI_Daemon" __crm_diff_marker__="removed:top" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -     </constraints>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: -   </configuration>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: - </cib>
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib:diff: + <cib epoch="21" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="Cluster-Server-1" update-client="crmd" cib-last-written="Mon Sep 10 15:27:48 2012" have-quorum="1" dc-uuid="Cluster-Server-2" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=Cluster-Server-1/cibadmin/2, version=0.21.1): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_COUNT
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_uptime: Current CPU usage is: 0s, 600000us
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_compare_age: Win: 600000 vs 0  (usec)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_election_count_vote: Election 14 (current: 14, owner: Cluster-Server-2): Processed no-vote from Cluster-Server-1 (Recorded)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_ELECTION_CHECK
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_election_check: Destroying voted hash
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_START
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_te_control: The transitioner is already active
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_START
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_START
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=401
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TAKEOVER
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: do_dc_takeover: Taking over DC status for this partition
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/346, version=0.21.2): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_readwrite: We are still in R/W mode
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/349, version=0.21.4): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/350, version=0.21.5): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 195 for master-p_Device_drive:1=10000 passed
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 197 for probe_complete=true passed
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 199 for pingd=100 passed
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/352, version=0.21.9): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_OFFER_ALL
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: initialize_join: join-12: Initializing join data (flag=true)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-12: Sending offer to Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: join_make_offer: join-12: Sending offer to Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: do_dc_join_offer_all: join-12: Waiting on 2 outstanding join acks
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Discarding PE request in state: S_INTEGRATION
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/354, version=0.21.10): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_OFFER: join-12
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_REQUEST
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: update_dc: Set DC to Cluster-Server-2 (3.0.6)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Call 355 : Parsing CIB options
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: config_query_callback: Checking for expired actions every 900000ms
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Respond to join offer join-12
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: join_query_callback: Acknowledging Cluster-Server-2 as our DC
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-12: Welcoming node Cluster-Server-2 (ref join_request-crmd-1347283669-216)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-12
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-12: Still waiting on 1 outstanding offers
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_REQ
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: Processing req from Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: join-12: Welcoming node Cluster-Server-1 (ref join_request-crmd-1347283669-46)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_filter_offer: 2 nodes have been integrated into join-12
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-12: Integration of 2 peers complete: do_dc_join_filter_offer
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes responded to the join offer.
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_START
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=405
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINALIZE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_finalize: Finalizing join-12 for 2 clients
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: do_dc_join_finalize: join-12: Syncing the CIB from Cluster-Server-2 to the rest of the cluster
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: sync_our_cib: Syncing CIB to all peers
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/357, version=0.21.10): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-12: Still waiting on 2 integrated nodes
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: finalize_sync_callback: Notifying 2 clients of join-12 results
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-12: ACK'ing join request from Cluster-Server-1, state member
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: finalize_join_for: join-12: ACK'ing join request from Cluster-Server-2, state member
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: handle_request: Raising I_JOIN_RESULT: join-12
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: Confirming join join-12: join_ack_nack
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource FS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource ExportFS_nfs2 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource FS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete start op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_NFS_Server:1 after complete monitor op (interval=30000)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource LVM_drive after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi2 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_PingD:1 after complete monitor op (interval=10000)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_Device_drive:1 after complete monitor op (interval=20000)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Target_iscsi2 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource p_iSCSI_Daemon:1 after complete monitor op (interval=30000)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource ExportFS_nfs1 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: create_operation_update: build_active_RAs: Updating resource Lun_iscsi3 after complete monitor op (interval=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_cl_join_finalize_respond: join-12: Join complete.  Sending local LRM status to Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: Ignoring op=join_ack_nack message from Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-12: Updating node state to member for Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-2']/lrm
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/358, version=0.21.13): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-12: Registered callback for LRM update 361
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/359, version=0.21.14): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-2']/lrm (/cib/status/node_state[2]/lrm)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-2']/lrm (origin=local/crmd/360, version=0.21.15): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-2']/lrm": ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 361 complete
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-12: Still waiting on 1 finalized nodes
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_CL_JOIN_RESULT
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_PROCESS_ACK
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: do_dc_join_ack: join-12: Updating node state to member for Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: erase_status_tag: Deleting xpath: //node_state[@uname='Cluster-Server-1']/lrm
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_ack: join-12: Registered callback for LRM update 363
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='Cluster-Server-1']/lrm (/cib/status/node_state[1]/lrm)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='Cluster-Server-1']/lrm (origin=local/crmd/362, version=0.21.20): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: erase_xpath_callback: Deletion of "//node_state[@uname='Cluster-Server-1']/lrm": ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: join_update_complete_callback: Join update 363 complete
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: check_join_state: join-12 complete: join_update_complete_callback
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-1: true (overwrite=true) hash_size=2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: ghash_update_cib_node: Updating Cluster-Server-2: true (overwrite=true) hash_size=2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: All 2 cluster nodes are eligible to run resources.
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_JOIN_FINAL
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_update_quorum: Updating quorum status to true (call=366)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_CANCEL
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_te_invoke: Cancelling the transition: inactive
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_PE_INVOKE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke: Query 367: Requesting the current CIB: S_POLICY_ENGINE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.21 -> 0.21.22 (S_POLICY_ENGINE)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/364, version=0.21.22): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.22 -> 0.21.23 (S_POLICY_ENGINE)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.23 -> 0.21.24 (S_POLICY_ENGINE)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/366, version=0.21.24): ok (rc=0)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:0'] does not exist
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update -22 for master-p_Device_drive:0=(null) passed
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 202 for pingd=100 passed
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_Device_drive:1 (10000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_config: Startup probes: enabled
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH timeout: 60000
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_config: STONITH of failed nodes is disabled
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Stop all active resources: false
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_config: Default stickiness: 0
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_domains: Unpacking domains
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-1 is online
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: determine_online_status: Node Cluster-Server-2 is online
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource ExportFS_nfs1 on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="ExportFS_nfs1" type="exportfs" class="ocf" provider="nas" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource FS_nfs1 on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="FS_nfs1" type="Filesystem" class="ocf" provider="nas" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: LVM_drive_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource LVM_drive active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi1 on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi1" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:0_last_failure_0 on Cluster-Server-1 returned 8 (master) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Operation monitor found resource p_Device_drive:0 active in master mode on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:0_last_failure_0 on Cluster-Server-1 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi3 on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi3" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Target_iscsi1 on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Target_iscsi1" type="SCSTTarget" class="ocf" provider="nas" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: process_orphan_resource: Detected orphan resource Lun_iscsi3 on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: create_fake_resource: Orphan resource <primitive id="Lun_iscsi3" type="SCSTLun" class="ocf" provider="nas" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs2_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs2 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: FS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing FS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_PingD:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_Device_drive:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: p_iSCSI_Daemon:1_last_failure_0 on Cluster-Server-2 returned 0 (ok) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: unpack_rsc_op: Operation monitor found resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: unpack_rsc_op: ExportFS_nfs1_last_failure_0 on Cluster-Server-2 returned 5 (not installed) instead of the expected value: 7 (not running)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: unpack_rsc_op: Preventing ExportFS_nfs1 from re-starting on Cluster-Server-2: operation monitor failed 'not installed' (rc=5)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-1 had value 100 for pingd
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: get_node_score: Rule Device_drive_on_Connected_Node-rule: node Cluster-Server-2 had value 100 for pingd
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: NFS_Server [p_NFS_Server]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_NFS_Server:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: iSCSI_Daemon [p_iSCSI_Daemon]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_iSCSI_Daemon:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: clone_print:  Clone Set: PingD [p_PingD]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_PingD:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: short_print:      Started: [ Cluster-Server-1 Cluster-Server-2 ]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: clone_print:  Master/Slave Set: Device_drive [p_Device_drive]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:0 active on Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_active: Resource p_Device_drive:1 active on Cluster-Server-2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: short_print:      Masters: [ Cluster-Server-1 ]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: short_print:      Slaves: [ Cluster-Server-2 ]
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: native_print: LVM_drive	(ocf::nas:LVM2):	Started Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: iSCSI_iscsi2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: native_print:      Target_iscsi2	(ocf::nas:SCSTTarget):	Started Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: native_print:      Lun_iscsi2	(ocf::nas:SCSTLun):	Started Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: group_print:  Resource Group: NFS_nfs2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: native_print:      FS_nfs2	(ocf::nas:Filesystem):	Started Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: native_print:      ExportFS_nfs2	(ocf::nas:exportfs):	Started Cluster-Server-1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_prefer_Node-rule) is not active (role : Master)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_rsc_location: Constraint (Device_drive_on_Connected_Node-rule) is not active (role : Master)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:0: preferring current location (node=Cluster-Server-1, weight=1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: common_apply_stickiness: Resource p_PingD:1: preferring current location (node=Cluster-Server-2, weight=1)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='master-p_Device_drive:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_pe_invoke_callback: Invoking the PE: query=367, ref=pe_calc-dc-1347283669-220, seq=312, quorate=1
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-master-p_Device_drive.1" name="master-p_Device_drive:1" value="10000" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.24 -> 0.21.25 (S_POLICY_ENGINE)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.25 -> 0.21.26 (S_POLICY_ENGINE)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_NFS_Server:0
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_NFS_Server:1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 NFS_Server instances of a possible 2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_iSCSI_Daemon:0
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_iSCSI_Daemon:1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 iSCSI_Daemon instances of a possible 2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_PingD:0
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_PingD:1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 PingD instances of a possible 2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.26 -> 0.21.27 (S_POLICY_ENGINE)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to p_Device_drive:0
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-2 to p_Device_drive:1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: clone_color: Allocated 2 Device_drive instances of a possible 2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:0 master score: 10150
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: master_color: Promoting p_Device_drive:0 (Master Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: master_color: p_Device_drive:1 master score: 10100
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: master_color: Device_drive: Promoted 1 instances of a possible 1 to master
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to LVM_drive
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Target_iscsi2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to Lun_iscsi2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to FS_nfs2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Assigning Cluster-Server-1 to ExportFS_nfs2
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource ExportFS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for ExportFS_nfs1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource FS_nfs1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for FS_nfs1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi3
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Target_iscsi1 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Target_iscsi1
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: All nodes for resource Lun_iscsi3 are unavailable, unclean or shutting down (Cluster-Server-2: 1, -1000000)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: native_assign_node: Could not allocate a node for Lun_iscsi3
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: debug: master_create_actions: Creating actions for Device_drive
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[1])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-probe_complete" name="probe_complete" value="true" />
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:0	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_NFS_Server:1	(Started Cluster-Server-2)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:0	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_iSCSI_Daemon:1	(Started Cluster-Server-2)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:0	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_PingD:1	(Started Cluster-Server-2)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:0	(Master Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   p_Device_drive:1	(Slave Cluster-Server-2)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   LVM_drive	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Target_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   Lun_iscsi2	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   FS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: info: LogActions: Leave   ExportFS_nfs2	(Started Cluster-Server-1)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_PE_SUCCESS from handle_response() received in state S_POLICY_ENGINE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log <create_reply_adv origin="process_pe_message" t="crmd" version="3.0.6" subt="response" reference="pe_calc-dc-1347283669-220" crm_task="pe_calc" crm_sys_to="dc" crm_sys_from="pengine" crm-tgraph-in="/var/lib/pengine/pe-input-27.bz2" graph-errors="0" graph-warnings="0" config-errors="0" config-warnings="0" >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   <crm_xml >
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log     <transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" batch-limit="30" transition_id="27" />
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log   </crm_xml>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: do_log </create_reply_adv>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_TE_INVOKE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: unpack_graph: Unpacked transition 27: 0 actions in 0 synapses
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: info: do_te_invoke: Processing graph 27 (ref=pe_calc-dc-1347283669-220) derived from /var/lib/pengine/pe-input-27.bz2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: run_graph: ==== Transition 27 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-27.bz2): Complete
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: print_graph: ## Empty transition graph ##
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_graph_trigger: Transition 27 is now complete
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: notify_crmd: Transition 27 status: done - <null>
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_LOG   
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_state_transition: Starting PEngine Recheck Timer
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=415
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_DC_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_INTEGRATE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: do_fsa_action: actions:trace: 	// A_FINALIZE_TIMER_STOP
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.27 -> 0.21.28 (S_IDLE)
Sep 10 15:27:49 Cluster-Server-2 cib: [40192]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='Cluster-Server-2']//transient_attributes//nvpair[@name='pingd'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: find_nvpair_attr_delegate: Match <nvpair id="status-Cluster-Server-2-pingd" name="pingd" value="100" />
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 204 for probe_complete=true passed
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 206 for master-p_Device_drive:1=10000 passed
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 208 for probe_complete=true passed
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.28 -> 0.21.29 (S_IDLE)
Sep 10 15:27:49 Cluster-Server-2 attrd: [40195]: debug: attrd_cib_callback: Update 210 for pingd=100 passed
Sep 10 15:27:49 Cluster-Server-2 pengine: [40196]: notice: process_pe_message: Transition 27: PEngine Input stored in: /var/lib/pengine/pe-input-27.bz2
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.29 -> 0.21.30 (S_IDLE)
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify
Sep 10 15:27:49 Cluster-Server-2 crmd: [40197]: debug: te_update_diff: Processing diff (cib_modify): 0.21.30 -> 0.21.31 (S_IDLE)
Sep 10 15:27:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 61980)
Sep 10 15:27:55 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 61981)
SCSTTarget(Target_iscsi2)[61980]:	2012/09/10_15:27:55 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:27:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 61980 exited with return code 0
SCSTLun(Lun_iscsi2)[61981]:	2012/09/10_15:27:55 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[61981]:	2012/09/10_15:27:55 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:27:55 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 61981 exited with return code 0
Sep 10 15:27:58 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 62147)
Sep 10 15:27:58 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 8742)
Sep 10 15:27:59 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 62163)
exportfs(ExportFS_nfs2)[62163]:	2012/09/10_15:27:59 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[62163]:	2012/09/10_15:27:59 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:27:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 62163 exited with return code 0
Sep 10 15:27:59 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 62174)
drbd(p_Device_drive:0)[62174]:	2012/09/10_15:27:59 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:27:59 Cluster-Server-1 crm_attribute: [62204]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:27:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:27:59 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[62174]:	2012/09/10_15:27:59 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[62174]:	2012/09/10_15:27:59 DEBUG: drive: Command output: 
Sep 10 15:27:59 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:27:59 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 62174 exited with return code 8
Sep 10 15:28:00 Cluster-Server-1 attrd_updater: [62213]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:00 Cluster-Server-1 attrd_updater: [62213]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:00 Cluster-Server-1 attrd_updater: [62213]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:28:00 Cluster-Server-1 attrd_updater: [62213]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:28:00 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:28:00 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:28:00 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 62147 exited with return code 0
Sep 10 15:28:00 Cluster-Server-2 attrd_updater: [8803]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:00 Cluster-Server-2 attrd_updater: [8803]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:00 Cluster-Server-2 attrd_updater: [8803]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:28:00 Cluster-Server-2 attrd_updater: [8803]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:28:00 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:28:00 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:28:00 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 8742 exited with return code 0
Sep 10 15:28:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 62215)
Sep 10 15:28:05 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 62216)
SCSTTarget(Target_iscsi2)[62215]:	2012/09/10_15:28:05 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:28:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 62215 exited with return code 0
SCSTLun(Lun_iscsi2)[62216]:	2012/09/10_15:28:05 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[62216]:	2012/09/10_15:28:05 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:28:05 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 62216 exited with return code 0
Sep 10 15:28:08 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_Device_drive:1 monitor[11] (pid 9473)
drbd(p_Device_drive:1)[9473]:	2012/09/10_15:28:08 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: info: determine_host: Mapped Cluster-Server-2 to Cluster-Server-2
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: info: attrd_lazy_update: Updated master-p_Device_drive:1=10000 for Cluster-Server-2
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: info: main: Update master-p_Device_drive:1=10000 sent via attrd
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:28:08 Cluster-Server-2 crm_attribute: [9503]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:28:08 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:1=10000
Sep 10 15:28:08 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:1)[9473]:	2012/09/10_15:28:08 DEBUG: drive: Exit code 0
drbd(p_Device_drive:1)[9473]:	2012/09/10_15:28:08 DEBUG: drive: Command output: 
Sep 10 15:28:08 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_Device_drive:1:monitor:stdout) 

Sep 10 15:28:08 Cluster-Server-2 lrmd: [40194]: info: operation monitor[11] on p_Device_drive:1 for client 40197: pid 9473 exited with return code 0
Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:FS_nfs2 monitor[43] (pid 62278)
Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[43] on FS_nfs2 for client 48715: pid 62278 exited with return code 0
Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 62344)
exportfs(ExportFS_nfs2)[62344]:	2012/09/10_15:28:09 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[62344]:	2012/09/10_15:28:09 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 62344 exited with return code 0
Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 62371)
drbd(p_Device_drive:0)[62371]:	2012/09/10_15:28:09 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:28:09 Cluster-Server-1 crm_attribute: [62401]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:28:09 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:28:09 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[62371]:	2012/09/10_15:28:09 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[62371]:	2012/09/10_15:28:09 DEBUG: drive: Command output: 
Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:28:09 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 62371 exited with return code 8
Sep 10 15:28:09 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:10 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 62494)
Sep 10 15:28:10 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 9646)
Sep 10 15:28:10 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:10 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:12 Cluster-Server-1 attrd_updater: [62806]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:12 Cluster-Server-1 attrd_updater: [62806]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:12 Cluster-Server-1 attrd_updater: [62806]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:28:12 Cluster-Server-1 attrd_updater: [62806]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:28:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:28:12 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:28:12 Cluster-Server-1 lrmd: [48712]: info: operation monitor[6] on p_PingD:0 for client 48715: pid 62494 exited with return code 0
Sep 10 15:28:12 Cluster-Server-2 attrd_updater: [10139]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:12 Cluster-Server-2 attrd_updater: [10139]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:28:12 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:28:12 Cluster-Server-2 attrd_updater: [10139]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:28:12 Cluster-Server-2 attrd_updater: [10139]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:28:12 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 9646 exited with return code 0
Sep 10 15:28:12 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:12 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_NFS_Server:0 monitor[8] (pid 62873)
Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_iSCSI_Daemon:0 monitor[5] (pid 62874)
Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[8] on p_NFS_Server:0 for client 48715: pid 62873 exited with return code 0
Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_NFS_Server:0:monitor:stdout) nfsd running

Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: info: operation monitor[5] on p_iSCSI_Daemon:0 for client 48715: pid 62874 exited with return code 0
Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Target_iscsi2 monitor[21] (pid 62879)
Sep 10 15:28:15 Cluster-Server-1 lrmd: [48712]: debug: rsc:Lun_iscsi2 monitor[23] (pid 62880)
SCSTLun(Lun_iscsi2)[62880]:	2012/09/10_15:28:16 INFO: Lun_iscsi2 monitor : 0
SCSTLun(Lun_iscsi2)[62880]:	2012/09/10_15:28:16 INFO: Lun_iscsi2 monitor : 0
Sep 10 15:28:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_NFS_Server:1 monitor[8] (pid 10788)
Sep 10 15:28:15 Cluster-Server-2 lrmd: [40194]: debug: RA output: (p_NFS_Server:1:monitor:stdout) nfsd running

Sep 10 15:28:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[8] on p_NFS_Server:1 for client 40197: pid 10788 exited with return code 0
Sep 10 15:28:15 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_iSCSI_Daemon:1 monitor[5] (pid 10790)
Sep 10 15:28:15 Cluster-Server-2 lrmd: [40194]: info: operation monitor[5] on p_iSCSI_Daemon:1 for client 40197: pid 10790 exited with return code 0
Sep 10 15:28:16 Cluster-Server-1 lrmd: [48712]: info: operation monitor[23] on Lun_iscsi2 for client 48715: pid 62880 exited with return code 0
SCSTTarget(Target_iscsi2)[62879]:	2012/09/10_15:28:16 DEBUG: Target_iscsi2 monitor : 0
Sep 10 15:28:16 Cluster-Server-1 lrmd: [48712]: info: operation monitor[21] on Target_iscsi2 for client 48715: pid 62879 exited with return code 0
Sep 10 15:28:19 Cluster-Server-1 lrmd: [48712]: debug: rsc:ExportFS_nfs2 monitor[45] (pid 62953)
exportfs(ExportFS_nfs2)[62953]:	2012/09/10_15:28:19 INFO: Directory /volumes/nfs2 is exported to * (started).
exportfs(ExportFS_nfs2)[62953]:	2012/09/10_15:28:19 INFO: Directory /volumes/nfs2 is exported to * (started).
Sep 10 15:28:19 Cluster-Server-1 lrmd: [48712]: info: operation monitor[45] on ExportFS_nfs2 for client 48715: pid 62953 exited with return code 0
Sep 10 15:28:19 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_Device_drive:0 monitor[11] (pid 62965)
drbd(p_Device_drive:0)[62965]:	2012/09/10_15:28:20 DEBUG: drive: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: cib_native_signon_raw: Connection to CIB successful
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: query_node_uuid: Result section <nodes >
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: query_node_uuid: Result section   <node id="Cluster-Server-1" type="normal" uname="Cluster-Server-1" />
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: query_node_uuid: Result section   <node id="Cluster-Server-2" type="normal" uname="Cluster-Server-2" />
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: query_node_uuid: Result section </nodes>
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: info: determine_host: Mapped Cluster-Server-1 to Cluster-Server-1
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: attrd_update_delegate: Sent update: master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: info: attrd_lazy_update: Updated master-p_Device_drive:0=10000 for Cluster-Server-1
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: info: main: Update master-p_Device_drive:0=10000 sent via attrd
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: debug: cib_native_signoff: Signing out of the CIB Service
Sep 10 15:28:20 Cluster-Server-1 crm_attribute: [62995]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Sep 10 15:28:20 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: update message from crm_attribute: master-p_Device_drive:0=10000
Sep 10 15:28:20 Cluster-Server-1 attrd: [48713]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
drbd(p_Device_drive:0)[62965]:	2012/09/10_15:28:20 DEBUG: drive: Exit code 0
drbd(p_Device_drive:0)[62965]:	2012/09/10_15:28:20 DEBUG: drive: Command output: 
Sep 10 15:28:20 Cluster-Server-1 lrmd: [48712]: debug: RA output: (p_Device_drive:0:monitor:stdout) 

Sep 10 15:28:20 Cluster-Server-1 lrmd: [48712]: info: operation monitor[11] on p_Device_drive:0 for client 48715: pid 62965 exited with return code 8
Sep 10 15:28:22 Cluster-Server-1 lrmd: [48712]: debug: rsc:p_PingD:0 monitor[6] (pid 63059)
Sep 10 15:28:22 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:22 Cluster-Server-2 lrmd: [40194]: debug: rsc:p_PingD:1 monitor[6] (pid 11428)
Sep 10 15:28:23 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:24 Cluster-Server-2 crmd: [40197]: info: handle_request: Current ping state: S_IDLE
Sep 10 15:28:24 Cluster-Server-2 attrd_updater: [11521]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Sep 10 15:28:24 Cluster-Server-2 attrd_updater: [11521]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Sep 10 15:28:24 Cluster-Server-2 attrd_updater: [11521]: debug: attrd_update_delegate: Sent update: pingd=100 for localhost
Sep 10 15:28:24 Cluster-Server-2 attrd_updater: [11521]: info: attrd_lazy_update: Updated pingd=100 for (null)
Sep 10 15:28:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: update message from attrd_updater: pingd=100
Sep 10 15:28:24 Cluster-Server-2 attrd: [40195]: debug: attrd_local_callback: Supplied: 100, Current: 100, Stored: 100
Sep 10 15:28:24 Cluster-Server-2 lrmd: [40194]: info: operation monitor[6] on p_PingD:1 for client 40197: pid 11428 exited with return code 0
