Hi Dejan,

Thanks for the tip. I have now tried ocf-tester with the DRBD resource on each node; in both cases it reports that there is nothing wrong with the DRBD resource's configuration:

# ocf-tester -n ms_drbd_mount1 -o drbd_resource="mount1" /usr/lib/ocf/resource.d/heartbeat/drbd
Beginning tests for /usr/lib/ocf/resource.d/heartbeat/drbd...
su: Permission denied
(Ignored)
/usr/lib/ocf/resource.d/heartbeat/drbd passed all tests

At this point the DRBD service is in slave mode on both nodes:

Online: [ node1 node2 ]

 Master/Slave Set: ms_drbd_mount1 [p_drbd_mount1]
     Slaves: [ node2 node1 ]
 Master/Slave Set: ms_drbd_mount2 [p_drbd_mount2]
     Slaves: [ node2 node1 ]

Attached is the output from /var/log/syslog when I attempt to start/promote/cleanup ms_drbd_mount1 or to start/restart/cleanup p_drbd_mount1.
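The same kind of direct test could also be pointed at the Filesystem and exportfs agents that failed during the earlier failover, run while the cluster is not managing those resources (as Dejan suggested). This is only a sketch; the parameter values below are placeholders, not the real device, mount point, and export options used on this cluster:

# parameter values are placeholders; substitute the actual device, mount point and export settings
ocf-tester -n p_fs_mount2 -o device=/dev/drbdX -o directory=/path/to/mount2 -o fstype=ext3 \
    /usr/lib/ocf/resource.d/heartbeat/Filesystem
ocf-tester -n p_exportfs_mount2 -o directory=/path/to/mount2 -o clientspec='*' -o fsid=2 \
    /usr/lib/ocf/resource.d/heartbeat/exportfs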
The configuration of these resources is as follows:

crm(live)configure# show ms_drbd_mount1
ms ms_drbd_mount1 p_drbd_mount1 \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started" is-managed="true"
crm(live)configure# show p_drbd_mount1
primitive p_drbd_mount1 ocf:linbit:drbd \
    params drbd_resource="mount1" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100" \
    op monitor interval="15" role="Master" \
    op monitor interval="30" role="Slave"

I tried reverting to an earlier version of my Pacemaker configuration (cibadmin --replace --xml-file=old.xml) and the problem disappears.
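Since the older CIB works, a diff between the two versions might narrow down which change introduced the problem. A sketch, assuming the known-good copy is in old.xml (the file names here are placeholders):

# dump the currently loaded CIB and compare it with the known-good copy
cibadmin --query > current.xml
crm_diff --original old.xml --new current.xml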
What else can I try to debug this?

Thanks,

Andrew

----- Original Message -----
From: "Dejan Muhamedagic" <dejanmm@fastmail.fm>
To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
Sent: Thursday, January 26, 2012 7:01:26 AM
Subject: Re: [Pacemaker] Nodes will not promote DRBD resources to master on failover

Hi,

On Wed, Jan 25, 2012 at 01:58:44PM -0600, Andrew Martin wrote:
> Hello,
>
> Recently I finished configuring a two-node cluster with pacemaker 1.1.6 and heartbeat 3.0.5 on nodes running Ubuntu 10.04. This cluster includes the following resources:
> - primitives for DRBD storage devices
> - primitives for mounting the filesystem on the DRBD storage
> - primitives for some mount binds
> - primitive for starting apache
> - primitives for starting samba and nfs servers (following instructions here )
> - primitives for exporting nfs shares (ocf:heartbeat:exportfs)
>
> Perhaps this is best described through the output of crm_mon:
>
> Online: [ node1 node2 ]
>
> Master/Slave Set: ms_drbd_mount1 [p_drbd_mount1] (unmanaged)
> p_drbd_mount1:0 (ocf::linbit:drbd): Started node2 (unmanaged)
> p_drbd_mount1:1 (ocf::linbit:drbd): Started node1 (unmanaged) FAILED
> Master/Slave Set: ms_drbd_mount2 [p_drbd_mount2]
> p_drbd_mount2:0 (ocf::linbit:drbd): Master node1 (unmanaged) FAILED
> Slaves: [ node2 ]
> Resource Group: g_core
> p_fs_mount1 (ocf::heartbeat:Filesystem): Started node1
> p_fs_mount2 (ocf::heartbeat:Filesystem): Started node1
> p_ip_nfs (ocf::heartbeat:IPaddr2): Started node1
> Resource Group: g_apache
> p_fs_mountbind1 (ocf::heartbeat:Filesystem): Started node1
> p_fs_mountbind2 (ocf::heartbeat:Filesystem): Started node1
> p_fs_mountbind3 (ocf::heartbeat:Filesystem): Started node1
> p_fs_varwww (ocf::heartbeat:Filesystem): Started node1
> p_apache (ocf::heartbeat:apache): Started node1
> Resource Group: g_fileservers
> p_lsb_smb (lsb:smbd): Started node1
> p_lsb_nmb (lsb:nmbd): Started node1
> p_lsb_nfsserver (lsb:nfs-kernel-server): Started node1
> p_exportfs_mount1 (ocf::heartbeat:exportfs): Started node1
> p_exportfs_mount2 (ocf::heartbeat:exportfs): Started node1
>
> I have read through the Pacemaker Explained documentation, however could not find a way to further debug these problems. First, I put node1 into standby mode to attempt failover to the other node (node2). Node2 appeared to start the transition to master, however it failed to promote the DRBD resources to master (the first step). I have attached a copy of this session in commands.log and additional excerpts from /var/log/syslog during important steps. I have attempted everything I can think of to try and start the DRBD resource (e.g. start/stop/promote/manage/cleanup under crm resource, restarting heartbeat) but cannot bring it out of the slave state.
However, if I set it to unmanaged and then run drbdadm primary all in the terminal, pacemaker is satisfied and continues starting the rest of the resources. It then failed when attempting to mount the filesystem for mount2, the p_fs_mount2 resource. I attempted to mount the filesystem myself and was successful. I then unmount!<br> ed it and ran cleanup on p_fs_mount2 and then it mounted. The rest of the resources started as expected until the p_exportfs_mount2 resource, which failed as follows:<br>> p_exportfs_mount2 (ocf::heartbeat:exportfs): started node2 (unmanaged) FAILED<br>> <br>> <br>> I ran cleanup on this and it started, however when running this test earlier today no command could successfully start this exportfs resource.<br>> <br>> <br>> How can I configure pacemaker to better resolve these problems and be able to bring the node up successfully on its own? What can I check to determine why these failures are occuring? /var/log/syslog did not seem to contain very much useful information regarding why the failures occurred.<br><br>All of the issues you encountered seemed to be specific to<br>resources you want to run. Best to stop the cluster and use<br>ocf-tester to see where could be a problem. If the problems are<br>intermittent, then you'll need to dig through the logs---look for<br>messages coming from lrmd and the RA itself.<br><br>Thanks,<br><br>Dejan<br><br><br>> <br>> Thanks,<br>> <br>> <br>> Andrew<br>> <br>> <br>> <br><br>> Node node1 (d6fbcffe-ff23-4031-b108-428eabe19b7b): standby<br>> Online: [ node2 ]<br>> <br>> Master/Slave Set: ms_drbd_mount1 [p_drbd_mount1]<br>> Slaves: [ node2 ]<br>> Stopped: [ p_drbd_mount1:1 ]<br>> Master/Slave Set: ms_drbd_mount2 [p_drbd_mount2]<br>> Slaves: [ node2 ]<br>> Stopped: [ p_drbd_mount2:1 ]<br>> <br>> Failed actions:<br>> p_fs_mount2_start_0 (node=node2, call=78, rc=1, status=complete): unknown error<br>> p_drbd_mount2:1_monitor_30000 (node=node2, call=86, rc=7, status=complete): not running<br>> <br>> <br>> # mount2 resource cleanup ms_drbd_mount1<br>> ---> see cleanup-ms_drbd_mount1.log<br>> <br>> # mount2 resource unmanage p_drbd_mount1<br>> # mount2_mon -1 <br>> Node node1 (d6fbcffe-ff23-4031-b108-428eabe19b7b): standby<br>> Online: [ node2 ]<br>> <br>> Master/Slave Set: ms_drbd_mount1 [p_drbd_mount1] (unmanaged)<br>> p_drbd_mount1:0 (ocf::linbit:drbd): Master node2 (unmanaged)<br>> Stopped: [ p_drbd_mount1:1 ]<br>> Master/Slave Set: ms_drbd_mount2 [p_drbd_mount2]<br>> Slaves: [ node2 ]<br>> Stopped: [ p_drbd_mount2:1 ]<br>> <br>> Failed actions:<br>> p_fs_mount2_start_0 (node=node2, call=78, rc=1, status=complete): unknown error<br>> p_drbd_mount2:1_monitor_30000 (node=node2, call=86, rc=7, status=complete): not running<br>> <br>> <br>> # mount2 resource unmanage p_drbd_mount2<br>> # drbdadm primary all<br>> # mount2_mon -1<br>> Node node1 (d6fbcffe-ff23-4031-b108-428eabe19b7b): standby<br>> Online: [ node2 ]<br>> <br>> Master/Slave Set: ms_drbd_mount1 [p_drbd_mount1] (unmanaged)<br>> p_drbd_mount1:0 (ocf::linbit:drbd): Master node2 (unmanaged)<br>> Stopped: [ p_drbd_mount1:1 ]<br>> Master/Slave Set: ms_drbd_mount2 [p_drbd_mount2] (unmanaged)<br>> p_drbd_mount2:0 (ocf::linbit:drbd): Master node2 (unmanaged)<br>> Stopped: [ p_drbd_mount2:1 ]<br>> Resource Group: g_core<br>> p_fs_mount1 (ocf::heartbeat:Filesystem): Started node2<br>> p_fs_mount2 (ocf::heartbeat:Filesystem): Stopped<br>> p_ip_nfs (ocf::heartbeat:IPaddr2): Stopped<br>> <br>> Failed actions:<br>> p_fs_mount2_start_0 (node=node2, call=78, rc=1, status=complete): 
unknown error<br>> p_drbd_mount2:1_monitor_30000 (node=node2, call=86, rc=7, status=complete): not running<br>> <br>> # mount2 resource cleanup p_fs_mount2<br>> ---> see cleanup-p_fs_mount2.log<br>> # mount2_mon -1<br>> Node node1 (d6fbcffe-ff23-4031-b108-428eabe19b7b): standby<br>> Online: [ node2 ]<br>> <br>> Master/Slave Set: ms_drbd_mount1 [p_drbd_mount1] (unmanaged)<br>> p_drbd_mount1:0 (ocf::linbit:drbd): Master node2 (unmanaged)<br>> Stopped: [ p_drbd_mount1:1 ]<br>> Master/Slave Set: ms_drbd_mount2 [p_drbd_mount2] (unmanaged)<br>> p_drbd_mount2:0 (ocf::linbit:drbd): Master node2 (unmanaged)<br>> Stopped: [ p_drbd_mount2:1 ]<br>> Resource Group: g_core<br>> p_fs_mount1 (ocf::heartbeat:Filesystem): Started node2<br>> p_fs_mount2 (ocf::heartbeat:Filesystem): Started node2<br>> p_ip_nfs (ocf::heartbeat:IPaddr2): Started node2<br>> Resource Group: g_apache<br>> p_fs_varsupport (ocf::heartbeat:Filesystem): Started node2<br>> p_fs_webdataeco (ocf::heartbeat:Filesystem): Started node2<br>> p_fs_webdatactl (ocf::heartbeat:Filesystem): Started node2<br>> p_fs_varwww (ocf::heartbeat:Filesystem): Started node2<br>> p_apache (ocf::heartbeat:apache): Started node2<br>> Resource Group: g_fileservers<br>> p_lsb_smb (lsb:smbd): Started node2<br>> p_lsb_nmb (lsb:nmbd): Started node2<br>> p_lsb_nfsserver (lsb:nfs-kernel-server): Started node2<br>> p_exportfs_mount1 (ocf::heartbeat:exportfs): Started node2<br>> p_exportfs_mount2 (ocf::heartbeat:exportfs): started node2 (unmanaged) FAILED<br>> <br>> # mount2 resource cleanup p_exportfs_mount2<br>> ---> see cleanup-p_exportfs_mount2.log<br>> <br>> <br><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_mount1:0'] (origin=local/crmd/212, version=1.186.4): ok (rc=0)<br>> crmd: [6852]: info: delete_resource: Removing resource p_drbd_mount1:0 for 29626_mount2_resource (internal) on node2<br>> crmd: [6852]: info: lrm_remove_deleted_op: Removing op p_drbd_mount1:0_monitor_30000:151 for deleted resource p_drbd_mount1:0<br>> crmd: [6852]: info: notify_deleted: Notifying 29626_mount2_resource on node2 that p_drbd_mount1:0 was deleted<br>> crmd: [6852]: WARN: decode_transition_key: Bad UUID (mount2-resource-29626) in sscanf result (3) for 0:0:mount2-resource-29626<br>> crmd: [6852]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:0_delete_60000 from 0:0:mount2-resource-29626: lrm_invoke-lrmd-1327518060-185<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_mount1:0'] (origin=local/crmd/213, version=1.186.5): ok (rc=0)<br>> crmd: [6852]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)<br>> crmd: [6852]: info: notify_deleted: Notifying 29626_mount2_resource on node2 that p_drbd_mount1:1 was deleted<br>> crmd: [6852]: WARN: decode_transition_key: Bad UUID (mount2-resource-29626) in sscanf result (3) for 0:0:mount2-resource-29626<br>> crmd: [6852]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_delete_60000 from 0:0:mount2-resource-29626: lrm_invoke-lrmd-1327518060-186<br>> cib: [6848]: info: cib:diff: - <cib admin_epoch="1" epoch="186" num_updates="5" ><br>> cib: [6848]: info: cib:diff: - <configuration ><br>> cib: [6848]: info: cib:diff: - <mount2_config ><br>> cib: [6848]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: - <nvpair value="1327518038" 
id="cib-bootstrap-options-last-lrm-refresh" /><br>> cib: [6848]: info: cib:diff: - </cluster_property_set><br>> cib: [6848]: info: cib:diff: - </mount2_config><br>> cib: [6848]: info: cib:diff: - </configuration><br>> cib: [6848]: info: cib:diff: - </cib><br>> cib: [6848]: info: cib:diff: + <cib epoch="187" num_updates="1" admin_epoch="1" validate-with="pacemaker-1.2" mount2_feature_set="3.0.5" update-origin="node1" update-client="crmd" cib-last-written="Wed Jan 25 13:00:39 2012" have-quorum="1" dc-uuid="039e53da-dce8-4fd7-84bc-7261682529e8" ><br>> cib: [6848]: info: cib:diff: + <configuration ><br>> cib: [6848]: info: cib:diff: + <mount2_config ><br>> cib: [6848]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1327518060" /><br>> cib: [6848]: info: cib:diff: + </cluster_property_set><br>> cib: [6848]: info: cib:diff: + </mount2_config><br>> cib: [6848]: info: cib:diff: + </configuration><br>> cib: [6848]: info: cib:diff: + </cib><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=local/crmd/215, version=1.187.1): ok (rc=0)<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_mount1:1'] (origin=local/crmd/216, version=1.187.2): ok (rc=0)<br>> crmd: [6852]: WARN: decode_transition_key: Bad UUID (mount2-resource-29626) in sscanf result (3) for 0:0:mount2-resource-29626<br>> crmd: [6852]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_delete_60000 from 0:0:mount2-resource-29626: lrm_invoke-lrmd-1327518060-187<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_mount1:0_last_failure_0, magic=0:0;6:10:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.186.5) : Resource op removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_mount1:0_last_failure_0, magic=0:0;6:10:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.186.5) : Resource op removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=1.187.1) : Non-status change<br>> crmd: [6852]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 219: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke: Query 220: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke: Query 221: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=local/crmd/218, version=1.187.3): ok (rc=0)<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=221, ref=pe_calc-dc-1327518060-188, seq=4, quorate=1<br>> lrmd: [6849]: info: rsc:p_drbd_mount1:0 probe[152] (pid 29627)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> 
crmd: [6852]: info: unpack_graph: Unpacked transition 14: 12 actions in 12 synapses<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> crmd: [6852]: info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1327518060-188) derived from /var/lib/pengine/pe-input-996.bz2<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> crmd: [6852]: info: te_rsc_command: Initiating action 6: monitor p_drbd_mount1:0_monitor_0 on node2 (local)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=6:14:7:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_drbd_mount1:0_monitor_0 )<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_fs_mount2_last_failure_0 on node2: unknown error (1)<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 11 fired and confirmed<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 12 fired and confirmed<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 9 fired and confirmed<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999998 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: WARN: common_apply_stickiness: Forcing p_fs_mount2 away from node2 after 1000000 failures (max=1000000)<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_drbd_mount1:0 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_drbd_mount1:0 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varsupport on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind2 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind3 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varwww on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (40s) for p_apache on node2<br>> pengine: [29654]: notice: LogActions: Start p_drbd_mount1:0#011(node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Slave node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Start p_fs_varsupport#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind2#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind3#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_varwww#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_apache#011(node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_smb#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nmb#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave 
p_lsb_nfsserver#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount2#011(Stopped)<br>> crmd: [6852]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<br>> crmd: [6852]: info: config_query_callback: Checking for expired actions every 900000ms<br>> lrmd: [6849]: info: operation monitor[152] on p_drbd_mount1:0 for client 6852: pid 29627 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_0 (call=152, rc=0, cib-update=223, confirmed=true) ok<br>> crmd: [6852]: WARN: status_from_rc: Action 6 (p_drbd_mount1:0_monitor_0) on node2 failed (target: 7 vs. rc: 0): Error<br>> crmd: [6852]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_mount1:0_last_failure_0, magic=0:0;6:14:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.187.4) : Event failed<br>> crmd: [6852]: info: update_abort_priority: Abort priority upgraded from 0 to 1<br>> crmd: [6852]: info: update_abort_priority: Abort action done superceeded by restart<br>> crmd: [6852]: info: match_graph_event: Action p_drbd_mount1:0_monitor_0 (6) confirmed on node2 (rc=4)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on node2 (local) - no waiting<br>> crmd: [6852]: info: run_graph: ====================================================<br>> crmd: [6852]: notice: run_graph: Transition 14 (Complete=5, Pending=0, Fired=0, Skipped=3, Incomplete=4, Source=/var/lib/pengine/pe-input-996.bz2): Stopped<br>> crmd: [6852]: info: te_graph_trigger: Transition 14 is now complete<br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 224: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=224, ref=pe_calc-dc-1327518060-191, seq=4, quorate=1<br>> pengine: [29654]: notice: process_pe_message: Transition 14: PEngine Input stored in: /var/lib/pengine/pe-input-996.bz2<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_fs_mount2_last_failure_0 on node2: unknown error (1)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999998 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: WARN: common_apply_stickiness: Forcing p_fs_mount2 away from node2 after 1000000 failures 
(max=1000000)<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_drbd_mount1:0 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_drbd_mount1:0 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varsupport on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind2 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind3 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varwww on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (40s) for p_apache on node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Slave node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Slave node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Stopped)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> pengine: [29654]: notice: LogActions: Start p_fs_varsupport#011(node2)<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 15: 1 actions in 1 synapses<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind2#011(node2)<br>> crmd: [6852]: info: do_te_invoke: Processing graph 15 (ref=pe_calc-dc-1327518060-191) derived from /var/lib/pengine/pe-input-997.bz2<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind3#011(node2)<br>> lrmd: [6849]: info: rsc:p_drbd_mount1:0 monitor[153] (pid 29656)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 9: monitor p_drbd_mount1:0_monitor_30000 on node2 (local)<br>> pengine: [29654]: notice: LogActions: Start p_fs_varwww#011(node2)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=9:15:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_drbd_mount1:0_monitor_30000 )<br>> pengine: [29654]: notice: LogActions: Start p_apache#011(node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_smb#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nmb#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nfsserver#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount2#011(Stopped)<br>> heartbeat: [6835]: WARN: Gmain_timeout_dispatch: Dispatch function for send local status took too long to execute: 60 ms (> 50 ms) (GSource: 0x225f400)<br>> pengine: [29654]: notice: process_pe_message: Transition 15: PEngine Input stored in: /var/lib/pengine/pe-input-997.bz2<br>> lrmd: [6849]: info: operation monitor[153] on p_drbd_mount1:0 for client 6852: pid 29656 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_30000 (call=153, rc=0, cib-update=225, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_drbd_mount1:0_monitor_30000 (9) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: run_graph: ====================================================<br>> crmd: [6852]: notice: run_graph: Transition 15 (Complete=1, Pending=0, 
Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-997.bz2): Complete<br>> crmd: [6852]: info: te_graph_trigger: Transition 15 is now complete<br>> crmd: [6852]: info: notify_crmd: Transition 15 status: done - <null><br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> crmd: [6852]: info: do_state_transition: Starting PEngine Recheck Timer<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.187.5): ok (rc=0)<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_mount1:0'] (origin=node1/crmd/34, version=1.187.5): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_mount1:0_last_0, magic=0:7;6:13:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.187.6) : Resource op removal<br>> crmd: [6852]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 226: Requesting the current CIB: S_POLICY_ENGINE<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=226, ref=pe_calc-dc-1327518061-193, seq=4, quorate=1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_fs_mount2_last_failure_0 on node2: unknown error (1)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999998 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: WARN: common_apply_stickiness: Forcing p_fs_mount2 away from node2 after 1000000 failures (max=1000000)<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varsupport on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind2 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind3 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varwww on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (40s) for p_apache on node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Slave node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Slave node2)<br>> pengine: 
[29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Stopped)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> pengine: [29654]: notice: LogActions: Start p_fs_varsupport#011(node2)<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 16: 0 actions in 0 synapses<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind2#011(node2)<br>> crmd: [6852]: info: do_te_invoke: Processing graph 16 (ref=pe_calc-dc-1327518061-193) derived from /var/lib/pengine/pe-input-998.bz2<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind3#011(node2)<br>> crmd: [6852]: info: run_graph: ====================================================<br>> pengine: [29654]: notice: LogActions: Start p_fs_varwww#011(node2)<br>> crmd: [6852]: notice: run_graph: Transition 16 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-998.bz2): Complete<br>> pengine: [29654]: notice: LogActions: Start p_apache#011(node2)<br>> crmd: [6852]: info: te_graph_trigger: Transition 16 is now complete<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_smb#011(Stopped)<br>> crmd: [6852]: info: notify_crmd: Transition 16 status: done - <null><br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nmb#011(Stopped)<br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nfsserver#011(Stopped)<br>> crmd: [6852]: info: do_state_transition: Starting PEngine Recheck Timer<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount2#011(Stopped)<br>> pengine: [29654]: notice: process_pe_message: Transition 16: PEngine Input stored in: /var/lib/pengine/pe-input-998.bz2<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.187.5): ok (rc=0)<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.187.5): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_mount1:0_last_0, magic=0:7;6:13:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.187.7) : Resource op removal<br>> crmd: [6852]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 227: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_mount1:0'] (origin=node1/crmd/35, version=1.187.7): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=1.188.1) : Non-status change<br>> cib: [6848]: info: cib:diff: - <cib admin_epoch="1" epoch="187" num_updates="7" 
><br>> crmd: [6852]: info: do_pe_invoke: Query 228: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib:diff: - <configuration ><br>> cib: [6848]: info: cib:diff: - <mount2_config ><br>> cib: [6848]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: - <nvpair value="1327518060" id="cib-bootstrap-options-last-lrm-refresh" /><br>> cib: [6848]: info: cib:diff: - </cluster_property_set><br>> cib: [6848]: info: cib:diff: - </mount2_config><br>> cib: [6848]: info: cib:diff: - </configuration><br>> cib: [6848]: info: cib:diff: - </cib><br>> cib: [6848]: info: cib:diff: + <cib epoch="188" num_updates="1" admin_epoch="1" validate-with="pacemaker-1.2" mount2_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Wed Jan 25 13:01:00 2012" have-quorum="1" dc-uuid="039e53da-dce8-4fd7-84bc-7261682529e8" ><br>> cib: [6848]: info: cib:diff: + <configuration ><br>> cib: [6848]: info: cib:diff: + <mount2_config ><br>> cib: [6848]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1327518061" /><br>> cib: [6848]: info: cib:diff: + </cluster_property_set><br>> cib: [6848]: info: cib:diff: + </mount2_config><br>> cib: [6848]: info: cib:diff: + </configuration><br>> cib: [6848]: info: cib:diff: + </cib><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=node1/crmd/37, version=1.188.1): ok (rc=0)<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_mount1:1'] (origin=node1/crmd/38, version=1.188.2): ok (rc=0)<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=node1/crmd/40, version=1.188.3): ok (rc=0)<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=228, ref=pe_calc-dc-1327518062-194, seq=4, quorate=1<br>> crmd: [6852]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<br>> crmd: [6852]: info: config_query_callback: Checking for expired actions every 900000ms<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_fs_mount2_last_failure_0 on node2: unknown error (1)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999998 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: WARN: common_apply_stickiness: Forcing p_fs_mount2 away from node2 after 1000000 failures (max=1000000)<br>> pengine: [29654]: notice: RecurringOp: 
Start recurring monitor (10s) for p_fs_varsupport on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind2 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind3 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varwww on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (40s) for p_apache on node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Slave node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Slave node2)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Start p_fs_varsupport#011(node2)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind2#011(node2)<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 17: 3 actions in 3 synapses<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind3#011(node2)<br>> crmd: [6852]: info: do_te_invoke: Processing graph 17 (ref=pe_calc-dc-1327518062-194) derived from /var/lib/pengine/pe-input-999.bz2<br>> pengine: [29654]: notice: LogActions: Start p_fs_varwww#011(node2)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 6: monitor p_drbd_mount1:0_monitor_0 on node1<br>> pengine: [29654]: notice: LogActions: Start p_apache#011(node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_smb#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nmb#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nfsserver#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount1#011(Stopped)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount2#011(Stopped)<br>> pengine: [29654]: notice: process_pe_message: Transition 17: PEngine Input stored in: /var/lib/pengine/pe-input-999.bz2<br>> crmd: [6852]: info: match_graph_event: Action p_drbd_mount1:0_monitor_0 (6) confirmed on node1 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on node1 - no waiting<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 4 fired and confirmed<br>> crmd: [6852]: info: run_graph: ====================================================<br>> crmd: [6852]: notice: run_graph: Transition 17 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-999.bz2): Complete<br>> crmd: [6852]: info: te_graph_trigger: Transition 17 is now complete<br>> crmd: [6852]: info: notify_crmd: Transition 17 status: done - <null><br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> crmd: [6852]: info: do_state_transition: Starting PEngine Recheck Timer<br><br>> attrd: [6851]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p_fs_mount2 (<null>)<br>> attrd: [6851]: notice: attrd_perform_update: Sent delete 631: node=039e53da-dce8-4fd7-84bc-7261682529e8, 
attr=fail-count-p_fs_mount2, id=<n/a>, set=(null), section=status<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_fs_mount2'] (origin=local/crmd/249, version=1.191.2): ok (rc=0)<br>> crmd: [6852]: info: delete_resource: Removing resource p_fs_mount2 for 1045_mount2_resource (internal) on node2<br>> crmd: [6852]: info: notify_deleted: Notifying 1045_mount2_resource on node2 that p_fs_mount2 was deleted<br>> crmd: [6852]: WARN: decode_transition_key: Bad UUID (mount2-resource-1045) in sscanf result (3) for 0:0:mount2-resource-1045<br>> crmd: [6852]: info: send_direct_ack: ACK'ing resource op p_fs_mount2_delete_60000 from 0:0:mount2-resource-1045: lrm_invoke-lrmd-1327519062-213<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_fs_mount2'] (origin=local/crmd/250, version=1.191.3): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:178 - Triggered transition abort (complete=1, tag=transient_attributes, id=039e53da-dce8-4fd7-84bc-7261682529e8, magic=NA, cib=1.191.2) : Transient attribute: removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_fs_mount2_last_0, magic=0:0;4:66:0:7c970883-1991-4dfe-bcef-ea0b59dff8ed, cib=1.191.3) : Resource op removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_fs_mount2_last_0, magic=0:0;4:66:0:7c970883-1991-4dfe-bcef-ea0b59dff8ed, cib=1.191.3) : Resource op removal<br>> crmd: [6852]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 253: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke: Query 254: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke: Query 255: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib:diff: - <cib admin_epoch="1" epoch="191" num_updates="3" ><br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=1.192.1) : Non-status change<br>> cib: [6848]: info: cib:diff: - <configuration ><br>> crmd: [6852]: info: do_pe_invoke: Query 256: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib:diff: - <mount2_config ><br>> cib: [6848]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: - <nvpair value="1327518061" id="cib-bootstrap-options-last-lrm-refresh" /><br>> cib: [6848]: info: cib:diff: - </cluster_property_set><br>> cib: [6848]: info: cib:diff: - </mount2_config><br>> cib: [6848]: info: cib:diff: - </configuration><br>> cib: [6848]: info: cib:diff: - </cib><br>> cib: [6848]: info: cib:diff: + <cib epoch="192" num_updates="1" admin_epoch="1" validate-with="pacemaker-1.2" mount2_feature_set="3.0.5" update-origin="node2" update-client="cibadmin" cib-last-written="Wed Jan 25 13:17:01 2012" have-quorum="1" dc-uuid="039e53da-dce8-4fd7-84bc-7261682529e8" ><br>> cib: [6848]: info: cib:diff: + <configuration ><br>> cib: [6848]: info: cib:diff: + <mount2_config ><br>> cib: [6848]: info: cib:diff: + 
<cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1327519062" /><br>> cib: [6848]: info: cib:diff: + </cluster_property_set><br>> cib: [6848]: info: cib:diff: + </mount2_config><br>> cib: [6848]: info: cib:diff: + </configuration><br>> cib: [6848]: info: cib:diff: + </cib><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=local/crmd/252, version=1.192.1): ok (rc=0)<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=256, ref=pe_calc-dc-1327519062-214, seq=4, quorate=1<br>> crmd: [6852]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<br>> crmd: [6852]: info: config_query_callback: Checking for expired actions every 900000ms<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999997 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount2:0 to remain promoted on node2<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount1:0 to remain promoted on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mount2 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_ip_nfs on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind1 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind2 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_mountbind3 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_fs_varwww on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (40s) for p_apache on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_lsb_smb on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_lsb_nmb on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_lsb_nfsserver on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_exportfs_mount1 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_exportfs_mount2 on 
node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_mount2#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_ip_nfs#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind1#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind2#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_mountbind3#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_fs_varwww#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_apache#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_lsb_smb#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_lsb_nmb#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_lsb_nfsserver#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_exportfs_mount1#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_exportfs_mount2#011(node2)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 25: 33 actions in 33 synapses<br>> crmd: [6852]: info: do_te_invoke: Processing graph 25 (ref=pe_calc-dc-1327519062-214) derived from /var/lib/pengine/pe-input-1007.bz2<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 74 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 8: monitor p_fs_mount2_monitor_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=8:25:7:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mount2_monitor_0 )<br>> lrmd: [6849]: info: rsc:p_fs_mount2 probe[158] (pid 1047)<br>> lrmd: [6849]: info: operation monitor[158] on p_fs_mount2 for client 6852: pid 1047 exited with return code 7<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mount2_monitor_0 (call=158, rc=7, cib-update=258, confirmed=true) not running<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mount2_monitor_0 (8) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 7: probe_complete probe_complete on node2 (local) - no waiting<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 5 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 70: start p_fs_mount2_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=70:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mount2_start_0 )<br>> lrmd: [6849]: info: rsc:p_fs_mount2 start[159] (pid 1079)<br>> pengine: [29654]: notice: process_pe_message: Transition 25: PEngine Input stored in: /var/lib/pengine/pe-input-1007.bz2<br>> Filesystem[1079]: [1108]: INFO: Running start for /dev/drbd1 on /mnt/storage/Mount1<br>> lrmd: [6849]: info: RA output: (p_fs_mount2:start:stderr) FATAL: Module scsi_hostadapter not found.<br>> kernel: [433319.639211] kjournald starting. 
Commit interval 5 seconds<br>> kernel: [433319.639229] EXT3-fs warning: maximal mount count reached, running e2fsck is recommended<br>> kernel: [433319.639705] EXT3 FS on drbd1, internal journal<br>> kernel: [433319.639716] EXT3-fs: mounted filesystem with ordered data mode.<br>> lrmd: [6849]: info: operation start[159] on p_fs_mount2 for client 6852: pid 1079 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mount2_start_0 (call=159, rc=0, cib-update=259, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mount2_start_0 (70) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 71: monitor p_fs_mount2_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=71:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mount2_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_fs_mount2 monitor[160] (pid 1133)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 72: start p_ip_nfs_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=72:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_ip_nfs_start_0 )<br>> lrmd: [6849]: info: rsc:p_ip_nfs start[161] (pid 1134)<br>> lrmd: [6849]: info: operation monitor[160] on p_fs_mount2 for client 6852: pid 1133 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mount2_monitor_10000 (call=160, rc=0, cib-update=260, confirmed=false) ok<br>> IPaddr2[1134]: [1197]: INFO: ip -f inet addr add 10.52.4.8/16 brd 10.52.255.255 dev eth0<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mount2_monitor_10000 (71) confirmed on node2 (rc=0)<br>> IPaddr2[1134]: [1200]: INFO: ip link set eth0 up<br>> IPaddr2[1134]: [1203]: INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.52.4.8 eth0 10.52.4.8 auto not_used not_used<br>> lrmd: [6849]: info: operation start[161] on p_ip_nfs for client 6852: pid 1134 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_ip_nfs_start_0 (call=161, rc=0, cib-update=261, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_ip_nfs_start_0 (72) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 75 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 73: monitor p_ip_nfs_monitor_30000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=73:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_ip_nfs_monitor_30000 )<br>> lrmd: [6849]: info: rsc:p_ip_nfs monitor[162] (pid 1212)<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 88 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 78: start p_fs_mountbind1_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=78:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mountbind1_start_0 )<br>> lrmd: [6849]: info: rsc:p_fs_mountbind1 start[163] (pid 1214)<br>> lrmd: [6849]: info: operation monitor[162] on p_ip_nfs for client 6852: pid 1212 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_ip_nfs_monitor_30000 (call=162, rc=0, cib-update=262, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_ip_nfs_monitor_30000 (73) confirmed on node2 (rc=0)<br>> Filesystem[1214]: [1270]: INFO: Running start for /mnt/storage/Mount2/Support on /var/support<br>> lrmd: [6849]: info: RA output: (p_fs_mountbind1:start:stderr) FATAL: Module scsi_hostadapter not found.<br>> 
lrmd: [6849]: info: operation start[163] on p_fs_mountbind1 for client 6852: pid 1214 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mountbind1_start_0 (call=163, rc=0, cib-update=263, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mountbind1_start_0 (78) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 79: monitor p_fs_mountbind1_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=79:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mountbind1_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_fs_mountbind1 monitor[164] (pid 1286)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 80: start p_fs_mountbind2_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=80:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mountbind2_start_0 )<br>> lrmd: [6849]: info: rsc:p_fs_mountbind2 start[165] (pid 1287)<br>> lrmd: [6849]: info: operation monitor[164] on p_fs_mountbind1 for client 6852: pid 1286 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mountbind1_monitor_10000 (call=164, rc=0, cib-update=264, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mountbind1_monitor_10000 (79) confirmed on node2 (rc=0)<br>> Filesystem[1287]: [1347]: INFO: Running start for /mnt/storage/Mount2/Products/eco on /var/support/web/data/eco<br>> lrmd: [6849]: info: RA output: (p_fs_mountbind2:start:stderr) FATAL: Module scsi_hostadapter not found.<br>> lrmd: [6849]: info: operation start[165] on p_fs_mountbind2 for client 6852: pid 1287 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mountbind2_start_0 (call=165, rc=0, cib-update=265, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mountbind2_start_0 (80) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 81: monitor p_fs_mountbind2_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=81:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mountbind2_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_fs_mountbind2 monitor[166] (pid 1363)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 82: start p_fs_mountbind3_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=82:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mountbind3_start_0 )<br>> lrmd: [6849]: info: rsc:p_fs_mountbind3 start[167] (pid 1364)<br>> lrmd: [6849]: info: operation monitor[166] on p_fs_mountbind2 for client 6852: pid 1363 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mountbind2_monitor_10000 (call=166, rc=0, cib-update=266, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mountbind2_monitor_10000 (81) confirmed on node2 (rc=0)<br>> Filesystem[1364]: [1423]: INFO: Running start for /mnt/storage/Mount2/ctl on /var/support/web/data/ctl<br>> lrmd: [6849]: info: RA output: (p_fs_mountbind3:start:stderr) FATAL: Module scsi_hostadapter not found.<br>> lrmd: [6849]: info: operation start[167] on p_fs_mountbind3 for client 6852: pid 1364 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mountbind3_start_0 (call=167, rc=0, cib-update=267, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mountbind3_start_0 (82) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: 
Initiating action 83: monitor p_fs_mountbind3_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=83:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_mountbind3_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_fs_mountbind3 monitor[168] (pid 1461)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 84: start p_fs_varwww_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=84:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_varwww_start_0 )<br>> lrmd: [6849]: info: rsc:p_fs_varwww start[169] (pid 1463)<br>> lrmd: [6849]: info: operation monitor[168] on p_fs_mountbind3 for client 6852: pid 1461 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_mountbind3_monitor_10000 (call=168, rc=0, cib-update=268, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mountbind3_monitor_10000 (83) confirmed on node2 (rc=0)<br>> Filesystem[1463]: [1534]: INFO: Running start for /mnt/storage/Mount2/var/www on /var/www<br>> lrmd: [6849]: info: RA output: (p_fs_varwww:start:stderr) FATAL: Module scsi_hostadapter not found.<br>> lrmd: [6849]: info: operation start[169] on p_fs_varwww for client 6852: pid 1463 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_varwww_start_0 (call=169, rc=0, cib-update=269, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_varwww_start_0 (84) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 85: monitor p_fs_varwww_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=85:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_fs_varwww_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_fs_varwww monitor[170] (pid 1562)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 86: start p_apache_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=86:25:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_apache_start_0 )<br>> lrmd: [6849]: info: rsc:p_apache start[171] (pid 1566)<br>> lrmd: [6849]: info: operation monitor[170] on p_fs_varwww for client 6852: pid 1562 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_fs_varwww_monitor_10000 (call=170, rc=0, cib-update=270, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_fs_varwww_monitor_10000 (85) confirmed on node2 (rc=0)<br>> apache[1566]: [1643]: INFO: apache not running<br>> apache[1566]: [1646]: INFO: waiting for apache /etc/apache2/apache2.conf to come up<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.192.14): ok (rc=0)<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_fs_mount2'] (origin=node1/crmd/44, version=1.192.14): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_fs_mount2_last_0, magic=0:7;9:8:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.192.15) : Resource op removal<br>> crmd: [6852]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000<br>> crmd: [6852]: info: update_abort_priority: Abort action done superceeded by restart<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.192.14): ok (rc=0)<br>> cib: last message repeated 
2 times<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_fs_mount2'] (origin=node1/crmd/45, version=1.192.17): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_fs_mount2_last_0, magic=0:7;9:8:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.192.17) : Resource op removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=0, tag=diff, id=(null), magic=NA, cib=1.193.1) : Non-status change<br>> cib: [6848]: info: cib:diff: - <cib admin_epoch="1" epoch="192" num_updates="17" ><br>> cib: [6848]: info: cib:diff: - <configuration ><br>> cib: [6848]: info: cib:diff: - <mount2_config ><br>> cib: [6848]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: - <nvpair value="1327519062" id="cib-bootstrap-options-last-lrm-refresh" /><br>> cib: [6848]: info: cib:diff: - </cluster_property_set><br>> cib: [6848]: info: cib:diff: - </mount2_config><br>> cib: [6848]: info: cib:diff: - </configuration><br>> cib: [6848]: info: cib:diff: - </cib><br>> cib: [6848]: info: cib:diff: + <cib epoch="193" num_updates="1" admin_epoch="1" validate-with="pacemaker-1.2" mount2_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Wed Jan 25 13:17:42 2012" have-quorum="1" dc-uuid="039e53da-dce8-4fd7-84bc-7261682529e8" ><br>> cib: [6848]: info: cib:diff: + <configuration ><br>> cib: [6848]: info: cib:diff: + <mount2_config ><br>> cib: [6848]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1327519064" /><br>> cib: [6848]: info: cib:diff: + </cluster_property_set><br>> cib: [6848]: info: cib:diff: + </mount2_config><br>> cib: [6848]: info: cib:diff: + </configuration><br>> cib: [6848]: info: cib:diff: + </cib><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=node1/crmd/47, version=1.193.1): ok (rc=0)<br>> crmd: [6852]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<br>> crmd: [6852]: info: config_query_callback: Checking for expired actions every 900000ms<br>> lrmd: [6849]: info: operation start[171] on p_apache for client 6852: pid 1566 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_apache_start_0 (call=171, rc=0, cib-update=272, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_apache_start_0 (86) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: run_graph: ====================================================<br>> crmd: [6852]: notice: run_graph: Transition 25 (Complete=19, Pending=0, Fired=0, Skipped=14, Incomplete=0, Source=/var/lib/pengine/pe-input-1007.bz2): Stopped<br>> crmd: [6852]: info: te_graph_trigger: Transition 25 is now complete<br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 273: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=273, ref=pe_calc-dc-1327519065-230, seq=4, quorate=1<br>> pengine: 
[29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999997 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount2:0 to remain promoted on node2<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount1:0 to remain promoted on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (40s) for p_apache on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_lsb_smb on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_lsb_nmb on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (30s) for p_lsb_nfsserver on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_exportfs_mount1 on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_exportfs_mount2 on node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind2#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind3#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_varwww#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_apache#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Start p_lsb_smb#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_lsb_nmb#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_lsb_nfsserver#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_exportfs_mount1#011(node2)<br>> pengine: [29654]: notice: LogActions: Start p_exportfs_mount2#011(node2)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ 
input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 26: 16 actions in 16 synapses<br>> crmd: [6852]: info: do_te_invoke: Processing graph 26 (ref=pe_calc-dc-1327519065-230) derived from /var/lib/pengine/pe-input-1008.bz2<br>> crmd: [6852]: info: te_rsc_command: Initiating action 13: monitor p_fs_mount2_monitor_0 on node1<br>> crmd: [6852]: info: te_rsc_command: Initiating action 94: monitor p_apache_monitor_40000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=94:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_apache_monitor_40000 )<br>> lrmd: [6849]: info: rsc:p_apache monitor[172] (pid 1669)<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 109 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 99: start p_lsb_smb_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=99:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_lsb_smb_start_0 )<br>> lrmd: [6849]: info: rsc:p_lsb_smb start[173] (pid 1670)<br>> lrmd: [1670]: WARN: For LSB init script, no additional parameters are needed.<br>> lrmd: [6849]: info: RA output: (p_lsb_smb:start:stdout) Rather than invoking init scripts through /etc/init.d, use the service(8)<br>> lrmd: [6849]: info: RA output: (p_lsb_smb:start:stdout) utility, e.g. service smbd start<br>> lrmd: [6849]: info: RA output: (p_lsb_smb:start:stdout) <br>> lrmd: [6849]: info: RA output: (p_lsb_smb:start:stdout) Since the script you are attempting to invoke has been converted to an<br>> lrmd: [6849]: info: RA output: (p_lsb_smb:start:stdout) Upstart job, you may also use the start(8) utility, e.g. start smbd<br>> lrmd: [6849]: info: RA output: (p_lsb_smb:start:stdout) smbd start/running, process 1690<br>> lrmd: [6849]: info: operation start[173] on p_lsb_smb for client 6852: pid 1670 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_lsb_smb_start_0 (call=173, rc=0, cib-update=274, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_lsb_smb_start_0 (99) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 100: monitor p_lsb_smb_monitor_30000 on node2 (local)<br>> lrmd: [6849]: info: rsc:p_lsb_smb monitor[174] (pid 1697)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=100:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_lsb_smb_monitor_30000 )<br>> crmd: [6852]: info: te_rsc_command: Initiating action 101: start p_lsb_nmb_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=101:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_lsb_nmb_start_0 )<br>> lrmd: [6849]: info: rsc:p_lsb_nmb start[175] (pid 1699)<br>> lrmd: [1699]: WARN: For LSB init script, no additional parameters are needed.<br>> lrmd: [6849]: info: operation monitor[174] on p_lsb_smb for client 6852: pid 1697 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_lsb_smb_monitor_30000 (call=174, rc=0, cib-update=275, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_lsb_smb_monitor_30000 (100) confirmed on node2 (rc=0)<br>> lrmd: [6849]: info: operation monitor[172] on p_apache for client 6852: pid 1669 exited with return code 0<br>> lrmd: [6849]: info: RA output: (p_lsb_nmb:start:stdout) Rather than invoking init scripts through /etc/init.d, use the service(8)#012utility, e.g. 
service nmbd start#012#012Since the script you are attempting to invoke has been converted to an#012Upstart job, you may also use the start(8) utility, e.g. start nmbd<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_apache_monitor_40000 (call=172, rc=0, cib-update=276, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_apache_monitor_40000 (94) confirmed on node2 (rc=0)<br>> pengine: [29654]: notice: process_pe_message: Transition 26: PEngine Input stored in: /var/lib/pengine/pe-input-1008.bz2<br>> smbd[1690]: [2012/01/25 13:17:45, 0] smbd/server.c:1115(main)<br>> lrmd: [6849]: info: RA output: (p_lsb_nmb:start:stdout) nmbd start/running, process 1730<br>> lrmd: [6849]: info: operation start[175] on p_lsb_nmb for client 6852: pid 1699 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_lsb_nmb_start_0 (call=175, rc=0, cib-update=277, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_lsb_nmb_start_0 (101) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 102: monitor p_lsb_nmb_monitor_30000 on node2 (local)<br>> lrmd: [6849]: info: rsc:p_lsb_nmb monitor[176] (pid 1731)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=102:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_lsb_nmb_monitor_30000 )<br>> crmd: [6852]: info: te_rsc_command: Initiating action 103: start p_lsb_nfsserver_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=103:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_lsb_nfsserver_start_0 )<br>> smbd[1690]: standard input is not a socket, assuming -D option<br>> lrmd: [6849]: info: rsc:p_lsb_nfsserver start[177] (pid 1732)<br>> lrmd: [1732]: WARN: For LSB init script, no additional parameters are needed.<br>> lrmd: [6849]: info: operation monitor[176] on p_lsb_nmb for client 6852: pid 1731 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_lsb_nmb_monitor_30000 (call=176, rc=0, cib-update=278, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_lsb_nmb_monitor_30000 (102) confirmed on node2 (rc=0)<br>> lrmd: [6849]: info: RA output: (p_lsb_nfsserver:start:stdout) * Exporting directories for NFS kernel daemon...<br>> lrmd: [6849]: info: RA output: (p_lsb_nfsserver:start:stdout) ...done.#012 * Starting NFS kernel daemon<br>> kernel: [433321.967109] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory<br>> kernel: [433321.967129] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery<br>> kernel: [433321.967142] NFSD: starting 10-second grace period<br>> lrmd: [6849]: info: RA output: (p_lsb_nfsserver:start:stdout) ...done.<br>> lrmd: [6849]: info: operation start[177] on p_lsb_nfsserver for client 6852: pid 1732 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_lsb_nfsserver_start_0 (call=177, rc=0, cib-update=279, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_lsb_nfsserver_start_0 (103) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 104: monitor p_lsb_nfsserver_monitor_30000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=104:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_lsb_nfsserver_monitor_30000 )<br>> lrmd: [6849]: info: rsc:p_lsb_nfsserver monitor[178] (pid 1755)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 105: start p_exportfs_mount1_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: 
Performing key=105:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_exportfs_mount1_start_0 )<br>> lrmd: [6849]: info: rsc:p_exportfs_mount1 start[179] (pid 1757)<br>> exportfs[1757]: [1762]: INFO: Directory /mnt/storage/Mount2 is not exported to 192.168.0.0/255.255.255.0 (stopped).<br>> lrmd: [6849]: info: operation monitor[178] on p_lsb_nfsserver for client 6852: pid 1755 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_lsb_nfsserver_monitor_30000 (call=178, rc=0, cib-update=280, confirmed=false) ok<br>> exportfs[1757]: [1764]: INFO: Exporting file system ...<br>> crmd: [6852]: info: match_graph_event: Action p_lsb_nfsserver_monitor_30000 (104) confirmed on node2 (rc=0)<br>> exportfs[1757]: [1771]: INFO: exporting 192.168.0.0/255.255.255.0:/mnt/storage/Mount2<br>> exportfs[1757]: [1775]: INFO: File system exported<br>> lrmd: [6849]: info: operation start[179] on p_exportfs_mount1 for client 6852: pid 1757 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_exportfs_mount1_start_0 (call=179, rc=0, cib-update=281, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_exportfs_mount1_start_0 (105) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 106: monitor p_exportfs_mount1_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=106:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_exportfs_mount1_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_exportfs_mount1 monitor[180] (pid 1780)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 107: start p_exportfs_mount2_start_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=107:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_exportfs_mount2_start_0 )<br>> lrmd: [6849]: info: rsc:p_exportfs_mount2 start[181] (pid 1781)<br>> exportfs[1781]: [1791]: INFO: Directory /mnt/storage/Mount1 is not exported to 192.168.0.0/255.255.255.0 (stopped).<br>> exportfs[1780]: [1790]: INFO: Directory /mnt/storage/Mount2 is exported to 192.168.0.0/255.255.255.0 (started).<br>> exportfs[1781]: [1793]: INFO: Exporting file system ...<br>> exportfs[1781]: [1801]: INFO: exporting 192.168.0.0/255.255.255.0:/mnt/storage/Mount1<br>> exportfs[1781]: [1805]: INFO: File system exported<br>> lrmd: [6849]: info: operation start[181] on p_exportfs_mount2 for client 6852: pid 1781 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_exportfs_mount2_start_0 (call=181, rc=0, cib-update=282, confirmed=true) ok<br>> crmd: [6852]: info: match_graph_event: Action p_exportfs_mount2_start_0 (107) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 110 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 108: monitor p_exportfs_mount2_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=108:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_exportfs_mount2_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_exportfs_mount2 monitor[182] (pid 1806)<br>> exportfs[1806]: [1811]: INFO: Directory /mnt/storage/Mount1 is exported to 192.168.0.0/255.255.255.0 (started).<br>> crmd: [6852]: info: match_graph_event: Action p_fs_mount2_monitor_0 (13) confirmed on node1 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 12: probe_complete probe_complete on node1 - no waiting<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 11 fired and confirmed<br><br>> attrd: [6851]: notice: 
attrd_trigger_update: Sending flush op to all hosts for: fail-count-p_exportfs_mount2 (<null>)<br>> attrd: [6851]: notice: attrd_perform_update: Sent delete 648: node=039e53da-dce8-4fd7-84bc-7261682529e8, attr=fail-count-p_exportfs_mount2, id=<n/a>, set=(null), section=status<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_exportfs_mount2'] (origin=local/crmd/294, version=1.193.21): ok (rc=0)<br>> crmd: [6852]: info: delete_resource: Removing resource p_exportfs_mount2 for 5885_mount2_resource (internal) on node2<br>> crmd: [6852]: info: notify_deleted: Notifying 5885_mount2_resource on node2 that p_exportfs_mount2 was deleted<br>> crmd: [6852]: WARN: decode_transition_key: Bad UUID (mount2-resource-5885) in sscanf result (3) for 0:0:mount2-resource-5885<br>> crmd: [6852]: info: send_direct_ack: ACK'ing resource op p_exportfs_mount2_delete_60000 from 0:0:mount2-resource-5885: lrm_invoke-lrmd-1327519232-253<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_exportfs_mount2'] (origin=local/crmd/295, version=1.193.22): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:178 - Triggered transition abort (complete=1, tag=transient_attributes, id=039e53da-dce8-4fd7-84bc-7261682529e8, magic=NA, cib=1.193.21) : Transient attribute: removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_exportfs_mount2_last_0, magic=0:0;107:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.193.22) : Resource op removal<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_exportfs_mount2_last_0, magic=0:0;107:26:0:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.193.22) : Resource op removal<br>> crmd: [6852]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 298: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke: Query 299: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke: Query 300: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib:diff: - <cib admin_epoch="1" epoch="193" num_updates="22" ><br>> cib: [6848]: info: cib:diff: - <configuration ><br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=1.194.1) : Non-status change<br>> cib: [6848]: info: cib:diff: - <mount2_config ><br>> crmd: [6852]: info: do_pe_invoke: Query 301: Requesting the current CIB: S_POLICY_ENGINE<br>> cib: [6848]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: - <nvpair value="1327519064" id="cib-bootstrap-options-last-lrm-refresh" /><br>> cib: [6848]: info: cib:diff: - </cluster_property_set><br>> cib: [6848]: info: cib:diff: - </mount2_config><br>> cib: [6848]: info: cib:diff: - </configuration><br>> cib: [6848]: info: cib:diff: - </cib><br>> cib: [6848]: info: cib:diff: + <cib epoch="194" num_updates="1" admin_epoch="1" validate-with="pacemaker-1.2" mount2_feature_set="3.0.5" update-origin="node1" update-client="crmd" 
cib-last-written="Wed Jan 25 13:17:45 2012" have-quorum="1" dc-uuid="039e53da-dce8-4fd7-84bc-7261682529e8" ><br>> cib: [6848]: info: cib:diff: + <configuration ><br>> cib: [6848]: info: cib:diff: + <mount2_config ><br>> cib: [6848]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1327519232" /><br>> cib: [6848]: info: cib:diff: + </cluster_property_set><br>> cib: [6848]: info: cib:diff: + </mount2_config><br>> cib: [6848]: info: cib:diff: + </configuration><br>> cib: [6848]: info: cib:diff: + </cib><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=local/crmd/297, version=1.194.1): ok (rc=0)<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999997 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: notice: common_apply_stickiness: p_exportfs_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount2:0 to remain promoted on node2<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=301, ref=pe_calc-dc-1327519232-254, seq=4, quorate=1<br>> crmd: [6852]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount1:0 to remain promoted on node2<br>> crmd: [6852]: info: config_query_callback: Checking for expired actions every 900000ms<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_exportfs_mount2 on node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave 
p_fs_mountbind2#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind3#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_varwww#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_apache#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_smb#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nmb#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nfsserver#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Start p_exportfs_mount2#011(node2)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 34: 7 actions in 7 synapses<br>> crmd: [6852]: info: do_te_invoke: Processing graph 34 (ref=pe_calc-dc-1327519232-254) derived from /var/lib/pengine/pe-input-1016.bz2<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 113 fired and confirmed<br>> crmd: [6852]: info: te_rsc_command: Initiating action 19: monitor p_exportfs_mount2_monitor_0 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=19:34:7:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_exportfs_mount2_monitor_0 )<br>> lrmd: [6849]: info: rsc:p_exportfs_mount2 probe[184] (pid 5887)<br>> exportfs[5887]: [5892]: INFO: Directory /mnt/storage/Mount1 is exported to 192.168.0.0/255.255.255.0s (started).<br>> pengine: [29654]: notice: process_pe_message: Transition 34: PEngine Input stored in: /var/lib/pengine/pe-input-1016.bz2<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.194.1): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_exportfs_mount2_last_0, magic=0:7;20:8:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.194.2) : Resource op removal<br>> crmd: [6852]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000<br>> crmd: [6852]: info: update_abort_priority: Abort action done superceeded by restart<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_exportfs_mount2'] (origin=node1/crmd/51, version=1.194.1): ok (rc=0)<br>> exportfs[5894]: [5899]: INFO: Directory /mnt/storage/Mount2 is exported to 192.168.0.0/255.255.255.0 (started).<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=1.194.1): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_exportfs_mount2_last_0, magic=0:7;20:8:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.194.2) : Resource op removal<br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_exportfs_mount2'] (origin=node1/crmd/52, version=1.194.2): ok (rc=0)<br>> crmd: [6852]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=0, tag=diff, id=(null), magic=NA, cib=1.195.1) : Non-status change<br>> cib: [6848]: info: cib:diff: - <cib admin_epoch="1" epoch="194" num_updates="3" ><br>> cib: [6848]: info: cib:diff: - <configuration ><br>> 
cib: [6848]: info: cib:diff: - <mount2_config ><br>> cib: [6848]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: - <nvpair value="1327519232" id="cib-bootstrap-options-last-lrm-refresh" /><br>> cib: [6848]: info: cib:diff: - </cluster_property_set><br>> cib: [6848]: info: cib:diff: - </mount2_config><br>> cib: [6848]: info: cib:diff: - </configuration><br>> cib: [6848]: info: cib:diff: - </cib><br>> cib: [6848]: info: cib:diff: + <cib epoch="195" num_updates="1" admin_epoch="1" validate-with="pacemaker-1.2" mount2_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Wed Jan 25 13:20:32 2012" have-quorum="1" dc-uuid="039e53da-dce8-4fd7-84bc-7261682529e8" ><br>> cib: [6848]: info: cib:diff: + <configuration ><br>> cib: [6848]: info: cib:diff: + <mount2_config ><br>> cib: [6848]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" ><br>> cib: [6848]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1327519233" /><br>> cib: [6848]: info: cib:diff: + </cluster_property_set><br>> cib: [6848]: info: cib:diff: + </mount2_config><br>> cib: [6848]: info: cib:diff: + </configuration><br>> cib: [6848]: info: cib:diff: + </cib><br>> cib: [6848]: info: cib_process_request: Operation complete: op cib_modify for section mount2_config (origin=node1/crmd/54, version=1.195.1): ok (rc=0)<br>> crmd: [6852]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<br>> crmd: [6852]: info: config_query_callback: Checking for expired actions every 900000ms<br>> lrmd: [6849]: info: operation monitor[184] on p_exportfs_mount2 for client 6852: pid 5887 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_exportfs_mount2_monitor_0 (call=184, rc=0, cib-update=304, confirmed=true) ok<br>> crmd: [6852]: WARN: status_from_rc: Action 19 (p_exportfs_mount2_monitor_0) on node2 failed (target: 7 vs. 
rc: 0): Error<br>> crmd: [6852]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_exportfs_mount2_last_failure_0, magic=0:0;19:34:7:94503998-9c03-42e2-adff-57e4cfbc79a2, cib=1.195.3) : Event failed<br>> crmd: [6852]: info: match_graph_event: Action p_exportfs_mount2_monitor_0 (19) confirmed on node2 (rc=4)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 18: probe_complete probe_complete on node2 (local) - no waiting<br>> crmd: [6852]: info: run_graph: ====================================================<br>> crmd: [6852]: notice: run_graph: Transition 34 (Complete=3, Pending=0, Fired=0, Skipped=4, Incomplete=0, Source=/var/lib/pengine/pe-input-1016.bz2): Stopped<br>> crmd: [6852]: info: te_graph_trigger: Transition 34 is now complete<br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> crmd: [6852]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>> crmd: [6852]: info: do_pe_invoke: Query 305: Requesting the current CIB: S_POLICY_ENGINE<br>> crmd: [6852]: info: do_pe_invoke_callback: Invoking the PE: query=305, ref=pe_calc-dc-1327519247-257, seq=4, quorate=1<br>> pengine: [29654]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node1<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_nmb_last_failure_0 found resource p_lsb_nmb active on node2<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_lsb_smb_last_failure_0 found resource p_lsb_smb active on node2<br>> pengine: [29654]: WARN: unpack_rsc_op: Processing failed op p_drbd_mount2:1_monitor_30000 on node2: not running (7)<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node2<br>> pengine: [29654]: notice: unpack_rsc_op: Operation p_exportfs_mount2_last_failure_0 found resource p_exportfs_mount2 active on node2<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: common_apply_stickiness: ms_drbd_mount2 can fail 999997 more times on node2 before being forced off<br>> pengine: last message repeated 2 times<br>> pengine: [29654]: notice: common_apply_stickiness: p_exportfs_mount1 can fail 999999 more times on node2 before being forced off<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount2:0 to remain promoted on node2<br>> pengine: [29654]: notice: can_be_master: Forcing unmanaged master p_drbd_mount1:0 to remain promoted on node2<br>> pengine: [29654]: notice: RecurringOp: Start recurring monitor (10s) for p_exportfs_mount2 on node2<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount1:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:0#011(Master unmanaged)<br>> pengine: [29654]: notice: LogActions: Leave p_drbd_mount2:1#011(Stopped unmanaged)<br>> pengine: [29654]: notice: LogActions: 
Leave p_fs_mount1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mount2#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_ip_nfs#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind2#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_mountbind3#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_fs_varwww#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_apache#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_smb#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nmb#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_lsb_nfsserver#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount1#011(Started node2)<br>> pengine: [29654]: notice: LogActions: Leave p_exportfs_mount2#011(Started node2)<br>> crmd: [6852]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>> crmd: [6852]: info: unpack_graph: Unpacked transition 35: 4 actions in 4 synapses<br>> crmd: [6852]: info: do_te_invoke: Processing graph 35 (ref=pe_calc-dc-1327519247-257) derived from /var/lib/pengine/pe-input-1017.bz2<br>> crmd: [6852]: info: te_rsc_command: Initiating action 113: monitor p_exportfs_mount2_monitor_10000 on node2 (local)<br>> crmd: [6852]: info: do_lrm_rsc_op: Performing key=113:35:0:94503998-9c03-42e2-adff-57e4cfbc79a2 op=p_exportfs_mount2_monitor_10000 )<br>> lrmd: [6849]: info: rsc:p_exportfs_mount2 monitor[185] (pid 6376)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 18: monitor p_exportfs_mount2_monitor_0 on node1<br>> exportfs[6376]: [6381]: INFO: Directory /mnt/storage/Mount1 is exported to 192.168.0.0/255.255.255.0 (started).<br>> pengine: [29654]: notice: process_pe_message: Transition 35: PEngine Input stored in: /var/lib/pengine/pe-input-1017.bz2<br>> crmd: [6852]: info: match_graph_event: Action p_exportfs_mount2_monitor_0 (18) confirmed on node1 (rc=0)<br>> crmd: [6852]: info: te_rsc_command: Initiating action 17: probe_complete probe_complete on node1 - no waiting<br>> crmd: [6852]: info: te_pseudo_action: Pseudo action 16 fired and confirmed<br>> exportfs[6603]: [6608]: INFO: Directory /mnt/storage/Mount2 is exported to 192.168.0.0/255.255.255.0 (started).<br>> lrmd: [6849]: info: operation monitor[185] on p_exportfs_mount2 for client 6852: pid 6376 exited with return code 0<br>> crmd: [6852]: info: process_lrm_event: LRM operation p_exportfs_mount2_monitor_10000 (call=185, rc=0, cib-update=306, confirmed=false) ok<br>> crmd: [6852]: info: match_graph_event: Action p_exportfs_mount2_monitor_10000 (113) confirmed on node2 (rc=0)<br>> crmd: [6852]: info: run_graph: ====================================================<br>> crmd: [6852]: notice: run_graph: Transition 35 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-1017.bz2): Complete<br>> crmd: [6852]: info: te_graph_trigger: Transition 35 is now complete<br>> crmd: [6852]: info: notify_crmd: Transition 35 status: done - <null><br>> crmd: [6852]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>> crmd: [6852]: info: do_state_transition: Starting PEngine Recheck Timer<br>> exportfs[6909]: [6914]: 
INFO: Directory /mnt/storage/Mount1 is exported to 192.168.0.0/255.255.255.0 (started).<br>> exportfs[7358]: [7363]: INFO: Directory /mnt/storage/Mount2 is exported to 192.168.0.0/255.255.255.0 (started).<br>> exportfs[7424]: [7429]: INFO: Directory /mnt/storage/Mount1 is exported to 192.168.0.0/255.255.255.0 (started).<br>> ntpd[1213]: Listening on interface #10 eth0, 10.52.4.8#123 Enabled<br>> ntpd[1213]: new interface(s) found: waking up resolver<br><br>_______________________________________________<br>Pacemaker mailing list: Pacemaker@oss.clusterlabs.org<br>http://oss.clusterlabs.org/mailman/listinfo/pacemaker<br><br>Project Home: http://www.clusterlabs.org<br>Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf<br>Bugs: http://bugs.clusterlabs.org<br></div><br></div></div></body></html>