Sir,<br><br>I have set up a two-node cluster on Ubuntu 9.10. I have added a cluster IP using ocf:heartbeat:IPaddr2, cloned the LSB script "postgresql-8.4", and also added a manually created script for Slony database replication.<br>
<br>Everything works fine except that I am not able to use the OCF resource scripts properly: fail-over is not taking place, and in some cases the resource is not even taken over. My ha.cf file and CIB configuration are included below.<br>
<br>My ha.cf file:<br><br>autojoin none<br>keepalive 2<br>deadtime 15<br>warntime 5<br>initdead 64<br>udpport 694<br>bcast eth0<br>auto_failback off<br>node node1<br>node node2<br>crm respawn<br>
use_logd yes<br><br><br>My cib.xml configuration in crm shell (CLI) format:<br><br>node $id="3952b93e-786c-47d4-8c2f-a882e3d3d105" node2 \<br> attributes standby="off"<br>node $id="ac87f697-5b44-4720-a8af-12a6f2295930" node1 \<br>
attributes standby="off"<br>primitive pgsql lsb:postgresql-8.4 \<br> meta target-role="Started" resource-stickness="inherited" \<br> op monitor interval="15s" timeout="25s" on-fail="standby"<br>
primitive slony-fail lsb:slony_failover \<br> meta target-role="Started"<br>primitive vir-ip ocf:heartbeat:IPaddr2 \<br> params ip="192.168.10.10" nic="eth0" cidr_netmask="24" broadcast="192.168.10.255" \<br>
op monitor interval="15s" timeout="25s" on-fail="standby" \<br> meta target-role="Started"<br>clone pgclone pgsql \<br> meta notify="true" globally-unique="false" interleave="true" target-role="Started"<br>
colocation ip-with-slony inf: slony-fail vir-ip<br>order slony-b4-ip inf: vir-ip slony-fail<br>property $id="cib-bootstrap-options" \<br> dc-version="1.0.5-3840e6b5a305ccb803d29b468556739e75532d56" \<br>
cluster-infrastructure="Heartbeat" \<br> no-quorum-policy="ignore" \<br> stonith-enabled="false" \<br> last-lrm-refresh="1266488780"<br>rsc_defaults $id="rsc-options" \<br>
resource-stickiness="INFINITY"<br><br><br><br>The cluster IP (192.168.10.10) is assigned on eth0; the nodes' own eth0 addresses are 192.168.10.129 on one machine and 192.168.10.130 on the other.<br><br>When I pull out the eth0 interface cable, fail-over does not take place.<br>
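<br>From what I have read, Pacemaker does not react to carrier loss by itself, and a ping resource may be needed for it to notice a dead link. I have not tried this yet, but I believe it would look roughly like the following (the gateway address 192.168.10.1 is only a guess for my network). Is this the right approach?<br><br>primitive ping-gw ocf:pacemaker:pingd \<br> params host_list="192.168.10.1" multiplier="100" \<br> op monitor interval="15s" timeout="20s"<br>clone ping-clone ping-gw \<br> meta globally-unique="false"<br>location vip-needs-net vir-ip \<br> rule -inf: not_defined pingd or pingd lte 0<br>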
<br>This is the log message I get when I pull out the cable:<br><br>"<span style="font-family: arial,helvetica,sans-serif;">Feb 18 16:55:58 node2 NetworkManager: <info> (eth0): carrier now OFF (device state 1</span>)"<br>
<br>and after a minute or two, this log snippet:<br><br>-------------------------------------------------------------------<br>Feb 18 16:57:37 node2 cib: [21940]: info: cib_stats: Processed 3 operations (13333.00us average, 0% utilization) in the last 10min<br>
Feb 18 17:02:53 node2 crmd: [21944]: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped!<br>Feb 18 17:02:53 node2 crmd: [21944]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]<br>
Feb 18 17:02:53 node2 crmd: [21944]: WARN: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED<br>Feb 18 17:02:53 node2 crmd: [21944]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<br>
Feb 18 17:02:53 node2 crmd: [21944]: info: do_pe_invoke: Query 111: Requesting the current CIB: S_POLICY_ENGINE<br>Feb 18 17:02:53 node2 crmd: [21944]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1266492773-121, seq=2, quorate=1<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: unpack_config: On loss of CCM Quorum: Ignore<br>Feb 18 17:02:53 node2 pengine: [21982]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0<br>
Feb 18 17:02:53 node2 pengine: [21982]: info: determine_online_status: Node node2 is online<br>Feb 18 17:02:53 node2 pengine: [21982]: info: unpack_rsc_op: slony-fail_monitor_0 on node2 returned 0 (ok) instead of the expected value: 7 (not running)<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: unpack_rsc_op: Operation slony-fail_monitor_0 found resource slony-fail active on node2<br>Feb 18 17:02:53 node2 pengine: [21982]: info: unpack_rsc_op: pgsql:0_monitor_0 on node2 returned 0 (ok) instead of the expected value: 7 (not running)<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: unpack_rsc_op: Operation pgsql:0_monitor_0 found resource pgsql:0 active on node2<br>Feb 18 17:02:53 node2 pengine: [21982]: info: determine_online_status: Node node1 is online<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: native_print: vir-ip#011(ocf::heartbeat:IPaddr2):#011Started node2<br>Feb 18 17:02:53 node2 pengine: [21982]: notice: native_print: slony-fail#011(lsb:slony_failover):#011Started node2<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: clone_print: Clone Set: pgclone<br>Feb 18 17:02:53 node2 pengine: [21982]: notice: print_list: #011Started: [ node2 node1 ]<br>Feb 18 17:02:53 node2 pengine: [21982]: notice: RecurringOp: Start recurring monitor (15s) for pgsql:1 on node1<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: LogActions: Leave resource vir-ip#011(Started node2)<br>Feb 18 17:02:53 node2 pengine: [21982]: notice: LogActions: Leave resource slony-fail#011(Started node2)<br>Feb 18 17:02:53 node2 pengine: [21982]: notice: LogActions: Leave resource pgsql:0#011(Started node2)<br>
Feb 18 17:02:53 node2 pengine: [21982]: notice: LogActions: Leave resource pgsql:1#011(Started node1)<br>Feb 18 17:02:53 node2 crmd: [21944]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<br>
Feb 18 17:02:53 node2 crmd: [21944]: info: unpack_graph: Unpacked transition 26: 1 actions in 1 synapses<br>Feb 18 17:02:53 node2 crmd: [21944]: info: do_te_invoke: Processing graph 26 (ref=pe_calc-dc-1266492773-121) derived from /var/lib/pengine/pe-input-125.bz2<br>
Feb 18 17:02:53 node2 crmd: [21944]: info: te_rsc_command: Initiating action 15: monitor pgsql:1_monitor_15000 on node1<br>Feb 18 17:02:53 node2 pengine: [21982]: ERROR: write_last_sequence: Cannout open series file /var/lib/pengine/pe-input.last for writing<br>
Feb 18 17:02:53 node2 pengine: [21982]: info: process_pe_message: Transition 26: PEngine Input stored in: /var/lib/pengine/pe-input-125.bz2<br>Feb 18 17:02:55 node2 crmd: [21944]: info: match_graph_event: Action pgsql:1_monitor_15000 (15) confirmed on node1 (rc=0)<br>
Feb 18 17:02:55 node2 crmd: [21944]: info: run_graph: ====================================================<br>Feb 18 17:02:55 node2 crmd: [21944]: notice: run_graph: Transition 26 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-125.bz2): Complete<br>
Feb 18 17:02:55 node2 crmd: [21944]: info: te_graph_trigger: Transition 26 is now complete<br>Feb 18 17:02:55 node2 crmd: [21944]: info: notify_crmd: Transition 26 status: done - <null><br>Feb 18 17:02:55 node2 crmd: [21944]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>
Feb 18 17:02:55 node2 crmd: [21944]: info: do_state_transition: Starting PEngine Recheck Timer<br>------------------------------------------------------------------------------<br><br>Also, I am not able to use the pgsql OCF script, so I am using the init script instead and have cloned it, since it needs to run on both nodes for Slony database replication.<br>
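<br>For reference, when I tried the OCF agent I used something like the definition below (the paths are what I believe the Ubuntu 8.4 packages use; I may have the parameters wrong, which could be why it would not start):<br><br>primitive pgsql-ocf ocf:heartbeat:pgsql \<br> params pgctl="/usr/lib/postgresql/8.4/bin/pg_ctl" \<br> pgdata="/var/lib/postgresql/8.4/main" pgdba="postgres" \<br> op monitor interval="30s" timeout="30s"<br>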
<br>I am using the heartbeat and pacemaker debs from the updated Ubuntu Karmic repository (Heartbeat 2.99).<br><br>Please check my configuration and tell me where I am going wrong.<br>
-- <br>Regards,<br><br>Jayakrishnan. L<br><br>Visit: <a href="http://www.jayakrishnan.bravehost.com">www.jayakrishnan.bravehost.com</a><br><br>