[Pacemaker] Pacemaker Digest, Vol 52, Issue 44

neha chatrath nehachatrath at gmail.com
Mon Mar 19 10:10:52 EDT 2012


Hello,

The standard JBoss RA cannot be used as master/slave.
I have configured it as an "ms" resource as a workaround for a Pacemaker bug
in which the start of one resource leads to a restart of unrelated clone
resources.

I don't think this should cause the problem I am facing.
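For reference, the workaround wraps a primitive in an "ms" resource. A minimal sketch with illustrative names (the agent behind it has to implement the promote and demote actions; "ocf:ptt:myRA" is a placeholder, not a real agent):

```
# Hypothetical resource names; ocf:ptt:myRA stands in for an agent
# that actually implements promote and demote.
primitive myapp ocf:ptt:myRA \
        op monitor interval="30" role="Master" timeout="30" \
        op monitor interval="40" role="Slave" timeout="30"
ms myapp_ms myapp \
        meta master-max="1" clone-max="2" clone-node-max="1" \
        interleave="true" notify="true"
```

Note the two monitor operations: for an ms resource, the Master and Slave roles need monitors with distinct intervals.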

 Thanks and regards
Neha

On Mon, Mar 19, 2012 at 6:48 PM, <pacemaker-request at oss.clusterlabs.org> wrote:

> Send Pacemaker mailing list submissions to
>        pacemaker at oss.clusterlabs.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> or, via email, send a message with subject or body 'help' to
>        pacemaker-request at oss.clusterlabs.org
>
> You can reach the person managing the list at
>        pacemaker-owner at oss.clusterlabs.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Pacemaker digest..."
>
>
> Today's Topics:
>
>   1. Re: Promote of one resource leads to start of another
>      resource in heartbeat cluster (emmanuel segura)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 19 Mar 2012 14:23:19 +0100
> From: emmanuel segura <emi2fast at gmail.com>
> To: The Pacemaker cluster resource manager
>        <pacemaker at oss.clusterlabs.org>
> Subject: Re: [Pacemaker] Promote of one resource leads to start of
>        another resource in heartbeat cluster
> Message-ID:
>        <CAE7pJ3D2ZUYBA-TVcYT55WLggv-a+NzERH=3Cw94Nu9P+VF3Cw at mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> Are you sure ocf:heartbeat:jboss can be used as an ms resource?
>
> If I remember correctly, the resource agent must implement the promote and
> demote actions for that.
>
> Sorry for my bad English.
>
> http://linux-ha.org/doc/man-pages/re-ra-jboss.html
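To illustrate the point above: before an agent can run under an "ms" resource, its action dispatch has to accept promote and demote (and advertise them in its meta-data). A hypothetical sketch, not the real ocf:heartbeat:jboss code; all names and messages are illustrative:

```shell
#!/bin/sh
# Sketch of the action dispatch an OCF resource agent needs before it can
# be used as a master/slave (ms) resource: besides start/stop/monitor it
# must handle promote and demote, and list them in its meta-data output.

ra_dispatch() {
    case "$1" in
        start)    echo "started" ;;
        stop)     echo "stopped" ;;
        monitor)  echo "running" ;;
        promote)  echo "promoted to master" ;;
        demote)   echo "demoted to slave" ;;
        meta-data)
            # A real agent prints full OCF XML metadata here, including
            # <action name="promote"/> and <action name="demote"/>.
            echo "supported: start stop monitor promote demote meta-data" ;;
        *)
            echo "unimplemented: $1" >&2
            return 3 ;;   # OCF_ERR_UNIMPLEMENTED
    esac
    return 0
}

# Demo: a master/slave-capable agent answers promote and demote.
ra_dispatch promote
ra_dispatch demote
```

An agent missing these branches (like the standard jboss RA, per the man page above) returns "unimplemented" when Pacemaker tries to promote it.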
>
> Il giorno 19 marzo 2012 14:10, neha chatrath <nehachatrath at gmail.com> ha
> scritto:
>
> > Hello,
> > I have the following 2 node cluster configuration:
> >
> > "node $id="15f8a22d-9b1a-4ce3-bca2-05f654a9ed6a" cps2 \
> >         attributes standby="off"
> > node $id="d3088454-5ff3-4bcd-b94c-5a2567e2759b" cps1 \
> >         attributes standby="off"
> > primitive CPS ocf:heartbeat:jboss_cps \
> >         params jboss_home="/home/cluster/cps/jboss-5.1.0.GA/"
> > java_home="/usr/" run_opts="-c all -b 0.0.0.0 -g clusterCPS
> > -Djboss.service.binding.set=ports-01 -Djboss.messaging.ServerPeerID=01"
> > statusurl="http://127.0.0.1:8180" shutdown_opts="-s 127.0.0.1:1199"
> > pstring="clusterCPS" \
> >         op start interval="0" timeout="150" \
> >         op stop interval="0" timeout="240" \
> >         op monitor interval="30s" timeout="40s"
> > primitive ClusterIP ocf:heartbeat:IPaddr2 \
> >         params ip="192.168.114.150" cidr_netmask="32" nic="bond0:114:1" \
> >         op monitor interval="40" timeout="20" \
> >         meta target-role="Started"
> > primitive EMS ocf:heartbeat:jboss \
> >         params jboss_home="/home/cluster/cps/Jboss_EMS/jboss-5.1.0.GA"
> > java_home="/usr/" run_opts="-c all -b 0.0.0.0 -g clusterEMS"
> > pstring="clusterEMS" \
> >         op start interval="0" timeout="60" \
> >         op stop interval="0" timeout="240" \
> >         op monitor interval="30s" timeout="40s"
> > primitive LB ocf:ptt:lb_ptt \
> >         op monitor interval="40"
> > primitive NDB_MGMT ocf:ptt:NDB_MGM_RA \
> >         op monitor interval="120" timeout="120"
> > primitive NDB_VIP ocf:heartbeat:IPaddr2 \
> >         params ip="192.168.117.150" cidr_netmask="255.255.255.255"
> > nic="bond0.117:4" \
> >         op monitor interval="30" timeout="25"
> > primitive Rmgr ocf:ptt:RM_RA \
> >         op monitor interval="60" role="Master" timeout="30"
> > on-fail="restart" \
> >         op monitor interval="40" role="Slave" timeout="40"
> > on-fail="restart" \
> >         op start interval="0" role="Master" timeout="30" \
> >         op start interval="0" role="Slave" timeout="35"
> > primitive mysql ocf:ptt:MYSQLD_RA \
> >         op monitor interval="180" timeout="200" \
> >         op start interval="0" timeout="40"
> > primitive ndbd ocf:ptt:NDBD_RA \
> >         op monitor interval="120" timeout="120"
> > ms CPS_CLONE CPS \
> >         meta master-max="1" master-max-node="1" clone-max="2"
> > clone-node-max="1" interleave="true" notify="true"
> > ms ms_Rmgr Rmgr \
> >         meta master-max="1" master-max-node="1" clone-max="2"
> > clone-node-max="1" interleave="true" notify="true" target-role="Started"
> > ms ms_mysqld mysql \
> >         meta master-max="1" master-max-node="1" clone-max="2"
> > clone-node-max="1" interleave="true" notify="true"
> > clone EMS_CLONE EMS \
> >         meta globally-unique="false" clone-max="2" clone-node-max="1"
> > clone LB_CLONE LB \
> >         meta globally-unique="false" clone-max="2" clone-node-max="1"
> > target-role="Started"
> > clone ndbdclone ndbd \
> >         meta globally-unique="false" clone-max="2" clone-node-max="1"
> > colocation RM_with_ip inf: ms_Rmgr:Master ClusterIP
> > colocation ndb_vip-with-ndb_mgm inf: NDB_MGMT NDB_VIP
> > order RM-after-ip inf: ClusterIP ms_Rmgr
> > order cps-after-mysqld inf: ms_mysqld CPS_CLONE
> > order ip-after-mysqld inf: ms_mysqld ClusterIP
> > order lb-after-cps inf: CPS_CLONE LB_CLONE
> > order mysqld-after-ndbd inf: ndbdclone ms_mysqld
> > order ndb_mgm-after-ndb_vip inf: NDB_VIP NDB_MGMT
> > order ndbd-after-ndb_mgm inf: NDB_MGMT ndbdclone
> > property $id="cib-bootstrap-options" \
> >         dc-version="1.0.11-9af47ddebcad19e35a61b2a20301dc038018e8e8" \
> >    cluster-infrastructure="Heartbeat" \
> >         no-quorum-policy="ignore" \
> >         stonith-enabled="false"
> > rsc_defaults $id="rsc-options" \
> >         resource-stickiness="100" \
> >         migration_threshold="3"
> > "
> > When I bring down the active node in the cluster, the ms_mysqld resource
> > on the standby node is promoted, but another resource (ms_Rmgr) gets
> > restarted.
> >
> > Following are excerpts from the logs:
> >
> > "Mar 19 18:09:58 CPS2 lrmd: [27576]: info: operation monitor[13] on
> > NDB_VIP for client 27579: pid 29532 exited with return code 0
> > Mar 19 18:10:06 CPS2 heartbeat: [27565]: WARN: node cps1: is dead
> > Mar 19 18:10:06 CPS2 heartbeat: [27565]: info: Link cps1:bond0.115 dead.
> > Mar 19 18:10:06 CPS2 ccm: [27574]: debug: recv msg status from cps1,
> > status:dead
> > Mar 19 18:10:06 CPS2 ccm: [27574]: debug: status of node cps1: active ->
> > dead
> > Mar 19 18:10:06 CPS2 ccm: [27574]: debug: recv msg CCM_TYPE_LEAVE from
> > cps1, status:[null ptr]
> > Mar 19 18:10:06 CPS2 ccm: [27574]: debug: quorum plugin: majority
> > Mar 19 18:10:06 CPS2 crmd: [27579]: notice: crmd_ha_status_callback:
> > Status update: Node cps1 now has status [dead] (DC=true)
> > Mar 19 18:10:06 CPS2 ccm: [27574]: debug: cluster:linux-ha,
> > member_count=1, member_quorum_votes=100
> > Mar 19 18:10:06 CPS2 crmd: [27579]: info: crm_update_peer_proc: cps1.ais
> > is now offline
> > .......
> > ......
> >
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Start
> > ClusterIP    (cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource NDB_VIP     (Started cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource NDB_MGMT    (Started cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource ndbd:0      (Stopped)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource ndbd:1      (Started cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource mysql:0     (Stopped)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Promote
> > mysql:1      (Slave -> Master cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource LB:0        (Stopped)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource LB:1        (Started cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource EMS:0       (Stopped)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource EMS:1       (Started cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource Rmgr:0      (Stopped)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Promote
> > Rmgr:1       (Slave -> Master cps2)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource CPS:0       (Stopped)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: notice: LogActions: Leave
> > resource CPS:1       (Slave cps2)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: s_crmd_fsa: Processing
> > I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE
> > origin=handle_response ]
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_fsa_action:
> > actions:trace:        // A_LOG
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_state_transition: State
> > transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> > cause=C_IPC_MESSAGE origin=handle_response ]
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_fsa_action:
> > actions:trace:        // A_DC_TIMER_STOP
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_fsa_action:
> > actions:trace:        // A_INTEGRATE_TIMER_STOP
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_fsa_action:
> > actions:trace:        // A_FINALIZE_TIMER_STOP
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_fsa_action:
> > actions:trace:        // A_TE_INVOKE
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: unpack_graph: Unpacked
> > transition 4: 31 actions in 31 synapses
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_te_invoke: Processing graph
> 4
> > (ref=pe_calc-dc-1332160807-80) derived from
> > /usr/var/lib/pengine/pe-input-1596.bz2
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 12: start ClusterIP_start_0 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_lrm_rsc_op: Performing
> > key=12:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7 op=ClusterIP_start_0 )
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op:2396:
> copying
> > parameters for rsc ClusterIP
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op: add an
> > operation operation start[34] on ClusterIP for client 27579, its
> > parameters: cidr_netmask=[32] crm_feature_set=[3.0.1]
> > CRM_meta_timeout=[20000] nic=[bond0:114:1] ip=[192.168.114.150]  to the
> > operation list.
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 5: cancel mysql:1_monitor_180000 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_lrm_invoke: PE requested op
> > mysql:1_monitor_180000 (call=NA) be cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: cancel_op: Scheduling
> > mysql:1:29 for removal
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: cancel_op: Cancelling op 29
> for
> > mysql:1 (mysql:1:29)
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: cancel_op: operation
> monitor[29]
> > on mysql:1 for client 27579, its parameters: CRM_meta_clone=[1]
> > CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ]
> > CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_active_uname=[ ]
> > CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ]
> > CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1]
> > CRM_meta_notify_demote_resource=[ ] CRM_meta_clone_max=[2]
> > CRM_meta_notify_slave_uname=[ ] CRM_meta_notify=[true]
> > CRM_meta_master_max=[1] CRM_meta_notify_start_r cancelled
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_cancel_op: operation 29
> > cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: cancel_op: Op 29 for mysql:1
> > (mysql:1:29): cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: send_direct_ack: ACK'ing
> > resource op mysql:1_monitor_180000 from
> > 5:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7: lrm_invoke-lrmd-1332160807-83
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: process_te_message: Processing
> > (N)ACK lrm_invoke-lrmd-1332160807-83 from cps2
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: match_graph_event: Action
> > mysql:1_monitor_180000 (5) confirmed on cps2 (rc=0)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_pseudo_action: Pseudo action
> > 42 fired and confirmed
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 4: cancel Rmgr:1_monitor_40000 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: do_lrm_invoke: PE requested op
> > Rmgr:1_monitor_40000 (call=NA) be cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: cancel_op: Scheduling
> Rmgr:1:33
> > for removal
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: cancel_op: Cancelling op 33
> for
> > Rmgr:1 (Rmgr:1:33)
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: cancel_op: operation
> monitor[33]
> > on Rmgr:1 for client 27579, its parameters: CRM_meta_clone=[1]
> > CRM_meta_notify_active_uname=[ ] CRM_meta_notify_slave_resource=[ ]
> > CRM_meta_notify_active_resource=[ ] CRM_meta_interval=[40000]
> > CRM_meta_notify_demote_uname=[ ] CRM_meta_globally_unique=[false]
> > CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ]
> > CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1]
> > CRM_meta_notify_demote_resource=[ ] CRM_meta_clone_max=[2]
> > CRM_meta_notify_start_resource=[Rmgr:0 Rmgr: cancelled
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_cancel_op: operation 33
> > cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: cancel_op: Op 33 for Rmgr:1
> > (Rmgr:1:33): cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: send_direct_ack: ACK'ing
> > resource op Rmgr:1_monitor_40000 from
> > 4:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7: lrm_invoke-lrmd-1332160807-85
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: process_te_message: Processing
> > (N)ACK lrm_invoke-lrmd-1332160807-85 from cps2
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: match_graph_event: Action
> > Rmgr:1_monitor_40000 (4) confirmed on cps2 (rc=0)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_pseudo_action: Pseudo action
> > 71 fired and confirmed
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_pseudo_action: Pseudo action
> > 72 fired and confirmed
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: run_graph: Transition 4
> > (Complete=0, Pending=1, Fired=6, Skipped=0, Incomplete=25,
> > Source=/usr/var/lib/pengine/pe-input-1596.bz2): In-progress
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: delete_op_entry: async:
> Sending
> > delete op for mysql:1_monitor_180000 (call=29)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: process_lrm_event: LRM
> operation
> > mysql:1_monitor_180000 (call=29, status=1, cib-update=0, confirmed=true)
> > Cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: delete_op_entry: async:
> Sending
> > delete op for Rmgr:1_monitor_40000 (call=33)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: process_lrm_event: LRM
> operation
> > Rmgr:1_monitor_40000 (call=33, status=1, cib-update=0, confirmed=true)
> > Cancelled
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 146: notify mysql:1_pre_notify_promote_0 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_lrm_rsc_op: Performing
> > key=146:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7 op=mysql:1_notify_0 )
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op: add an
> > operation operation notify[35] on mysql:1 for client 27579, its
> parameters:
> > CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ]
> > CRM_meta_notify_slave_resource=[mysql:1 ]
> CRM_meta_notify_active_resource=[
> > ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1]
> > CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[ ]
> > CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true]
> > CRM_meta_notify_start_resource=[ ] CRM_meta_notify_master_uname=[ ]
> > crm_feature_set=[3.0.1] CRM_meta_globally_u to the operation list.
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: rsc:mysql:1 notify[35] (pid
> > 29570)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: run_graph: Transition 4
> > (Complete=5, Pending=2, Fired=1, Skipped=0, Incomplete=24,
> > Source=/usr/var/lib/pengine/pe-input-1596.bz2): In-progress
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: get_xpath_object: No match for
> > //cib_update_result//diff-added//crm_config in
> > /notify/cib_update_result/diff
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: te_update_diff: Processing
> diff
> > (cib_delete): 0.538.74 -> 0.538.75 (S_TRANSITION_ENGINE)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: te_update_diff: Deleted
> > lrm_rsc_op mysql:1_monitor_180000 on 15f8a22d-9b1a-4ce3-bca2-05f654a9ed6a
> > was for graph event 5
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: get_xpath_object: No match for
> > //cib_update_result//diff-added//crm_config in
> > /notify/cib_update_result/diff
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: te_update_diff: Processing
> diff
> > (cib_delete): 0.538.75 -> 0.538.76 (S_TRANSITION_ENGINE)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: te_update_diff: Deleted
> > lrm_rsc_op Rmgr:1_monitor_40000 on 15f8a22d-9b1a-4ce3-bca2-05f654a9ed6a
> was
> > for graph event 4
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 27: promote mysql:1_promote_0 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_lrm_rsc_op: Performing
> > key=27:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7 op=mysql:1_promote_0 )
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op: add an
> > operation operation promote[36] on mysql:1 for client 27579, its
> > parameters: CRM_meta_clone=[1] CRM_meta_notify_slave_resource=[mysql:1 ]
> > CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ]
> > CRM_meta_notify_start_resource=[ ] CRM_meta_master_node_max=[1]
> > CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[ ]
> > CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true]
> > CRM_meta_notify_master_uname=[ ] CRM_meta_master_max=[1]
> > crm_feature_set=[3.0.1] CRM_meta_globally_unique= to the operation list.
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: rsc:mysql:1 promote[36] (pid
> > 29595)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: run_graph: Transition 4
> > (Complete=8, Pending=2, Fired=1, Skipped=0, Incomplete=21,
> > Source=/usr/var/lib/pengine/pe-input-1596.bz2): In-progress
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: RA output:
> > (mysql:1:promote:stdout) Entering promote
> >
> > MYSQLD_RA(mysql:1)[29595]:      2012/03/19_18:10:07 INFO: :Neha:
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: RA output:
> > (mysql:1:promote:stdout) Entering check state
> >
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: RA output:
> > (mysql:1:promote:stdout) Check State: Returning 1
> >
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: RA output:
> > (mysql:1:promote:stdout) promoting MYSQLD as Master!!!!!!!!!!!
> >
> > IPaddr2(ClusterIP)[29567]:      2012/03/19_18:10:07 INFO: Adding IPv4
> > address 192.168.114.150/32 with broadcast address 192.168.114.150 to
> > device bond0 (with label bond0:114:1)
> > Mar 19 18:10:07 CPS2 pengine: [27584]: info: process_pe_message:
> > Transition 4: PEngine Input stored in:
> > /usr/var/lib/pengine/pe-input-1596.bz2
> > IPaddr2(ClusterIP)[29567]:      2012/03/19_18:10:07 INFO: Bringing device
> > bond0 up
> > IPaddr2(ClusterIP)[29567]:      2012/03/19_18:10:07 INFO:
> > /usr/lib/heartbeat/send_arp -i 200 -r 5 -p
> > /usr/var/run/resource-agents/send_arp-192.168.114.150 bond0
> 192.168.114.150
> > auto not_used not_used
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: Managed ClusterIP:start process
> > 29567 exited with return code 0.
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: operation start[34] on
> ClusterIP
> > for client 27579: pid 29567 exited with return code 0
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: info: Invoked: crm_attribute
> > -N CPS2 -n master-mysql:1 -l reboot -v 10
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: debug:
> > init_client_ipc_comms_nodispatch: Attempting to talk on:
> > /usr/var/run/crm/cib_rw
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: debug:
> > init_client_ipc_comms_nodispatch: Attempting to talk on:
> > /usr/var/run/crm/cib_callback
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: debug:
> cib_native_signon_raw:
> > Connection to CIB successful
> >
> > .....
> > .....
> >
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: info: attrd_lazy_update:
> > Connecting to cluster... 5 retries remaining
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: debug:
> > init_client_ipc_comms_nodispatch: Attempting to talk on:
> > /usr/var/run/crm/attrd
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: debug: attrd_update: Sent
> > update: master-mysql:1=10 for CPS2
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: info: main: Update
> > master-mysql:1=10 sent via attrd
> > Mar 19 18:10:07 CPS2 crm_attribute: [29646]: debug: cib_native_signoff:
> > Signing out of the CIB Service
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: process_lrm_event: LRM
> operation
> > ClusterIP_start_0 (call=34, rc=0, cib-update=68, confirmed=true) ok
> > Mar 19 18:10:07 CPS2 attrd: [27578]: debug: attrd_local_callback: update
> > message from crm_attribute: master-mysql:1=10
> > Mar 19 18:10:07 CPS2 attrd: [27578]: debug: attrd_local_callback:
> > Supplied: 10, Current: 5, Stored: 5
> > Mar 19 18:10:07 CPS2 attrd: [27578]: debug: attrd_local_callback: New
> > value of master-mysql:1 is 10
> > Mar 19 18:10:07 CPS2 attrd: [27578]: info: attrd_trigger_update: Sending
> > flush op to all hosts for: master-mysql:1 (10)
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: Managed mysql:1:promote process
> > 29595 exited with return code 0.
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: operation promote[36] on
> mysql:1
> > for client 27579: pid 29595 exited with return code 0
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: process_lrm_event: LRM
> operation
> > mysql:1_promote_0 (call=36, rc=0, cib-update=69, confirmed=true) ok
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: get_xpath_object: No match for
> > //cib_update_result//diff-added//crm_config in
> > /notify/cib_update_result/diff
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: te_update_diff: Processing
> diff
> > (cib_modify): 0.538.77 -> 0.538.78 (S_TRANSITION_ENGINE)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: match_graph_event: Action
> > ClusterIP_start_0 (12) confirmed on cps2 (rc=0)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 13: monitor ClusterIP_monitor_40000 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_lrm_rsc_op: Performing
> > key=13:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7
> op=ClusterIP_monitor_40000 )
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op: add an
> > operation operation monitor[37] on ClusterIP for client 27579, its
> > parameters: CRM_meta_name=[monitor] cidr_netmask=[32]
> > crm_feature_set=[3.0.1] CRM_meta_timeout=[20000]
> CRM_meta_interval=[40000]
> > nic=[bond0:114:1] ip=[192.168.114.150]  to the operation list.
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: rsc:ClusterIP monitor[37] (pid
> > 29650)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_pseudo_action: Pseudo action
> > 69 fired and confirmed
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: run_graph: Transition 4
> > (Complete=9, Pending=2, Fired=2, Skipped=0, Incomplete=19,
> > Source=/usr/var/lib/pengine/pe-input-1596.bz2): In-progress
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: te_rsc_command: Initiating
> > action 65: start Rmgr:1_start_0 on cps2 (local)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: info: do_lrm_rsc_op: Performing
> > key=65:4:0:6c3bbe48-a3be-404f-9ca9-04360dbe5be7 op=Rmgr:1_start_0 )
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op:2396:
> copying
> > parameters for rsc Rmgr:1
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: debug: on_msg_perform_op: add an
> > operation operation start[38] on Rmgr:1 for client 27579, its parameters:
> > CRM_meta_clone=[1] CRM_meta_notify_stop_uname=[ ]
> > CRM_meta_notify_slave_resource=[Rmgr:1 ]
> CRM_meta_notify_active_resource=[
> > ] CRM_meta_notify_demote_uname=[ ] CRM_meta_master_node_max=[1]
> > CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[ ]
> > CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true]
> > CRM_meta_notify_start_resource=[Rmgr:1 ] CRM_meta_notify_master_uname=[ ]
> > crm_feature_set=[3.0.1] CRM_meta_globall to the operation list.
> > Mar 19 18:10:07 CPS2 lrmd: [27576]: info: rsc:Rmgr:1 start[38] (pid
> 29651)
> > Mar 19 18:10:07 CPS2 crmd: [27579]: debug: run_graph: Transition 4
> > (Complete=10, Pending=3, Fired=1, Skipped=0, Incomplete=18,
> > Source=/usr/var/lib/pengine/pe-input-1596.bz2): In-progress
> > Mar 19 18:10:07 CPS2 RM: [29651]: debug: RM_RA:Entering Main
> >
> > Mar 19 18:10:07 CPS2 RM: [29651]: info: value of OCF_RESOURCE_INSTANCE is
> > Rmgr:1
> >
> > Mar 19 18:10:07 CPS2 RM: [29651]: info: envinorment STATE variable value
> > is slave
> >
> > """""
> >
> > Can somebody help in this?
> >
> > Thanks and regards
> > Neha
> >
> >
> > _______________________________________________
> > Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> >
> >
>
>
> --
> this is my life and I live it as long as God wills
>
> ------------------------------
>
>
>
> End of Pacemaker Digest, Vol 52, Issue 44
> *****************************************
>



-- 
Cheers
Neha Chatrath
                          KEEP SMILING!!!!