[ClusterLabs] Disabling resources and adding apache instances

Vijay Partha vijaysarathy94 at gmail.com
Wed Aug 5 03:49:34 EDT 2015


Aug  5 08:50:32 vmx-occ-005 crmd[31755]:   notice: run_graph: Transition
110 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-277.bz2): Complete
Aug  5 08:50:32 vmx-occ-005 crmd[31755]:   notice: do_state_transition:
State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: do_state_transition:
State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug  5 08:52:59 vmx-occ-005 pengine[31754]:   notice: unpack_config: On
loss of CCM Quorum: Ignore
Aug  5 08:52:59 vmx-occ-005 pengine[31754]:   notice: LogActions: Move
ClusterIP#011(Started node1 -> node2)
Aug  5 08:52:59 vmx-occ-005 pengine[31754]:   notice: process_pe_message:
Calculated Transition 111: /var/lib/pacemaker/pengine/pe-input-278.bz2
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 6: stop ClusterIP_stop_0 on node1 (local)
Aug  5 08:52:59 vmx-occ-005 IPaddr2(ClusterIP)[9623]: INFO: IP status = ok,
IP_CIP=
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: process_lrm_event:
Operation ClusterIP_stop_0: ok (node=node1, call=179, rc=0, cib-update=417,
confirmed=true)
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 7: start ClusterIP_start_0 on node2
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 8: monitor ClusterIP_monitor_30000 on node2
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: run_graph: Transition
111 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-278.bz2): Complete
Aug  5 08:52:59 vmx-occ-005 crmd[31755]:   notice: do_state_transition:
State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:   notice: do_state_transition:
State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug  5 08:53:17 vmx-occ-005 pengine[31754]:   notice: unpack_config: On
loss of CCM Quorum: Ignore
Aug  5 08:53:17 vmx-occ-005 pengine[31754]:  warning:
common_apply_stickiness: Forcing WebSite away from node2 after 1000000
failures (max=1)
Aug  5 08:53:17 vmx-occ-005 pengine[31754]:   notice: LogActions: Start
WebSite#011(node1)
Aug  5 08:53:17 vmx-occ-005 pengine[31754]:   notice: process_pe_message:
Calculated Transition 112: /var/lib/pacemaker/pengine/pe-input-279.bz2
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 7: monitor WebSite_monitor_0 on node2
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 5: monitor WebSite_monitor_0 on node1 (local)
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:  warning: status_from_rc: Action 7
(WebSite_monitor_0) on node2 failed (target: 7 vs. rc: 1): Error
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:   notice: abort_transition_graph:
Transition aborted by WebSite_monitor_0 'create' on (null): Event failed
(magic=0:1;7:112:7:abd61174-dea6-4d66-9026-bbfb0f2f6eaf, cib=0.101.1,
source=match_graph_event:344, 0)
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:  warning: status_from_rc: Action 7
(WebSite_monitor_0) on node2 failed (target: 7 vs. rc: 1): Error
Aug  5 08:53:17 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 6: probe_complete probe_complete-node2 on node2 - no
waiting
Aug  5 08:53:18 vmx-occ-005 apache(WebSite)[9679]: INFO: apache not running
Aug  5 08:53:18 vmx-occ-005 crmd[31755]:   notice: process_lrm_event:
Operation WebSite_monitor_0: not running (node=node1, call=183, rc=7,
cib-update=419, confirmed=true)
Aug  5 08:53:18 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 4: probe_complete probe_complete-node1 on node1 (local) -
no waiting
Aug  5 08:53:18 vmx-occ-005 crmd[31755]:   notice: run_graph: Transition
112 (Complete=4, Pending=0, Fired=0, Skipped=3, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-279.bz2): Stopped
Aug  5 08:53:18 vmx-occ-005 pengine[31754]:   notice: unpack_config: On
loss of CCM Quorum: Ignore
Aug  5 08:53:18 vmx-occ-005 pengine[31754]:  warning:
unpack_rsc_op_failure: Processing failed op monitor for WebSite on node2:
unknown error (1)
Aug  5 08:53:18 vmx-occ-005 pengine[31754]:  warning:
unpack_rsc_op_failure: Processing failed op monitor for WebSite on node2:
unknown error (1)
Aug  5 08:53:18 vmx-occ-005 pengine[31754]:  warning:
common_apply_stickiness: Forcing WebSite away from node2 after 1000000
failures (max=1)
Aug  5 08:53:18 vmx-occ-005 pengine[31754]:   notice: LogActions: Recover
WebSite#011(Started node2 -> node1)
Aug  5 08:53:18 vmx-occ-005 pengine[31754]:   notice: process_pe_message:
Calculated Transition 113: /var/lib/pacemaker/pengine/pe-input-280.bz2
Aug  5 08:53:18 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 2: stop WebSite_stop_0 on node2
Aug  5 08:53:20 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 9: start WebSite_start_0 on node1 (local)
Aug  5 08:53:20 vmx-occ-005 apache(WebSite)[9714]: INFO: apache not running
Aug  5 08:53:20 vmx-occ-005 apache(WebSite)[9714]: INFO: waiting for apache
/etc/httpd/conf/httpd.conf to come up
Aug  5 08:53:21 vmx-occ-005 crmd[31755]:   notice: process_lrm_event:
Operation WebSite_start_0: ok (node=node1, call=184, rc=0, cib-update=421,
confirmed=true)
Aug  5 08:53:21 vmx-occ-005 crmd[31755]:   notice: te_rsc_command:
Initiating action 10: monitor WebSite_monitor_60000 on node1 (local)
Aug  5 08:53:21 vmx-occ-005 crmd[31755]:   notice: process_lrm_event:
Operation WebSite_monitor_60000: ok (node=node1, call=185, rc=0,
cib-update=422, confirmed=false)
Aug  5 08:53:21 vmx-occ-005 crmd[31755]:   notice: run_graph: Transition
113 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-280.bz2): Complete
Aug  5 08:53:21 vmx-occ-005 crmd[31755]:   notice: do_state_transition:
State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug  5 08:53:45 vmx-occ-005 crmd[31755]:   notice: do_state_transition:
State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug  5 08:53:45 vmx-occ-005 pengine[31754]:   notice: unpack_config: On
loss of CCM Quorum: Ignore
Aug  5 08:53:45 vmx-occ-005 pengine[31754]:  warning:
unpack_rsc_op_failure: Processing failed op monitor for WebSite on node2:
unknown error (1)
Aug  5 08:53:45 vmx-occ-005 pengine[31754]:  warning:
common_apply_stickiness: Forcing WebSite away from node2 after 1000000
failures (max=1)
Aug  5 08:53:45 vmx-occ-005 pengine[31754]:   notice: LogActions: Move
ClusterIP#011(Started node2 -> node1)
Aug  5 08:53:45 vmx-occ-005 pengine[31754]:   notice: process_pe_message:
Calculated Transition 114: /var/lib/pacemaker/pengine/pe-input-281.bz2
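The "Forcing WebSite away from node2 after 1000000 failures (max=1)" warnings above mean the failed probe pushed WebSite's failcount on node2 to infinity, so the policy engine bans the resource from that node. Once the underlying apache problem on node2 is fixed, the failure history can be inspected and cleared with pcs (a sketch; the resource name is taken from the logs):

```shell
# Show the accumulated failure count for WebSite on each node
pcs resource failcount show WebSite

# Clear the failure history and re-probe, so node2 becomes
# eligible to run WebSite again
pcs resource cleanup WebSite
```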




On Wed, Aug 5, 2015 at 1:05 PM, Tomas Jelinek <tojeline at redhat.com> wrote:

> On 5 Aug 2015 at 09:10, Vijay Partha wrote:
>
>> Cluster name: pacemaker1
>> Last updated: Wed Aug  5 09:07:27 2015
>> Last change: Wed Aug  5 08:58:24 2015
>> Stack: cman
>> Current DC: node1 - partition with quorum
>> Version: 1.1.11-97629de
>> 2 Nodes configured
>> 2 Resources configured
>>
>>
>> Online: [ node1 node2 ]
>>
>> Full list of resources:
>>
>>   ClusterIP    (ocf::heartbeat:IPaddr2):    Started node2
>>   WebSite    (ocf::heartbeat:apache):    Started node1
>>
>> Failed actions:
>>      WebSite_monitor_0 on node2 'unknown error' (1): call=96,
>> status=complete, last-rc-change='Wed Aug  5 08:53:24 2015', queued=1ms,
>> exec=51ms
>>
>
> Hi,
>
> Hard to say what the issue is without logs.
>
>
>>
>> Traceback (most recent call last):
>>    File "/usr/sbin/pcs", line 138, in <module>
>>      main(sys.argv[1:])
>>    File "/usr/sbin/pcs", line 127, in main
>>      status.status_cmd(argv)
>>    File "/usr/lib/python2.6/site-packages/pcs/status.py", line 13, in
>> status_cmd
>>      full_status()
>>    File "/usr/lib/python2.6/site-packages/pcs/status.py", line 60, in
>> full_status
>>      utils.serviceStatus("  ")
>>    File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 1504, in
>> serviceStatus
>>      if is_systemctl():
>>    File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 1476, in
>> is_systemctl
>>      elif re.search(r'Foobar Linux release 6\.', issue):
>> NameError: global name 'issue' is not defined
>>
>>
> It looks like you are hitting this bug:
> http://bugs.centos.org/view.php?id=7799
> Updating pcs is highly recommended.
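The traceback ends in a NameError because the local variable `issue` is apparently left unbound when reading the release file fails, yet `re.search()` is still called on it. A minimal reproduction of that failure pattern, with the obvious guard added (this is a sketch of the pattern, not the actual pcs source):

```python
import re

def is_systemctl(issue_path="/etc/issue"):
    # `issue` is bound only if the read succeeds; in the buggy
    # version a read failure left it undefined, so the later
    # re.search() raised the NameError seen in the traceback.
    try:
        with open(issue_path) as f:
            issue = f.read()
    except EnvironmentError:
        issue = ""  # the guard the broken version lacks
    if re.search(r"release 7\.", issue):
        return True   # systemd-based release
    return False      # EL6 and unknown releases: no systemctl
```

Updating pcs, as suggested above, picks up the upstream fix.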
>
> Regards,
> Tomas
>
>
>> This is the error that I got after adding the location constraint, and
>> ClusterIP started on node1.
>>
>> On Wed, Aug 5, 2015 at 12:37 PM, Andrei Borzenkov <arvidjaar at gmail.com
>> <mailto:arvidjaar at gmail.com>> wrote:
>>
>>     On Wed, Aug 5, 2015 at 9:23 AM, Vijay Partha
>>     <vijaysarathy94 at gmail.com <mailto:vijaysarathy94 at gmail.com>> wrote:
>>     > Hi,
>>     >
>>     > I have two questions.
>>     >
>>     > 1.) If I disable a resource and reboot the node, will Pacemaker
>>     > restart the service?
>>
>>     What exactly does "disable" mean? There is no such operation in
>>     Pacemaker.
>>
>>     > Or how can I stop the service so that, after a reboot, the service
>>     > is started automatically by Pacemaker?
>>     >
>>
>>     Unfortunately Pacemaker does not really provide any way to
>>     temporarily stop a resource. You can set the target role to Stopped,
>>     which will trigger a resource stop. The resource then won't be
>>     started after a reboot, because you told it to remain Stopped. The
>>     same applies to is-managed=false.
>>
>>     If I'm wrong and it is possible, I would be very interested to learn
>>     how.
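For reference, the target-role approach above looks like this with the pcs CLI used elsewhere in the thread (resource name taken from the status output; a sketch):

```shell
# Sets target-role=Stopped: the resource stops and stays stopped
# across node reboots until explicitly re-enabled.
pcs resource disable WebSite

# Clears target-role so Pacemaker may start the resource again.
pcs resource enable WebSite

# Equivalent low-level form, setting the meta attribute directly:
pcs resource meta WebSite target-role=Stopped
```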
>>
>>     > 2.) How do I create apache instances in such a way that one
>>     > instance runs on one node and another instance runs on the second
>>     > node?
>>     >
>>
>>     Just define two resources and set location constraints for each.
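A sketch of the two-resource approach with pcs (resource names, config paths, and port assumptions here are illustrative, not from the thread; two separate apache resources need distinct configs and ports if they could ever land on the same node):

```shell
# Two independent apache resources, each with its own config
pcs resource create WebSiteA ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd-a.conf \
    op monitor interval=60s
pcs resource create WebSiteB ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd-b.conf \
    op monitor interval=60s

# Pin each instance to its own node
pcs constraint location WebSiteA prefers node1=INFINITY
pcs constraint location WebSiteB prefers node2=INFINITY
```

If both instances serve identical content, cloning a single apache resource is the more idiomatic alternative to two separately configured resources.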
>>
>>     _______________________________________________
>>     Users mailing list: Users at clusterlabs.org <mailto:
>> Users at clusterlabs.org>
>>     http://clusterlabs.org/mailman/listinfo/users
>>
>>     Project Home: http://www.clusterlabs.org
>>     Getting started:
>> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>     Bugs: http://bugs.clusterlabs.org
>>
>>
>>
>>
>> --
>> With Regards
>> P.Vijay
>>
>>



-- 
With Regards
P.Vijay