[ClusterLabs] need some help with failing resources

Kostiantyn Ponomarenko konstantin.ponomarenko at gmail.com
Sat Dec 3 01:25:42 EST 2016


I assume that you are using crmsh.
If so, I suggest posting the output of the "crm configure show" command here.
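
For reference, here is a minimal set of diagnostics that usually helps in
this situation (a sketch assuming a reasonably recent crmsh/Pacemaker;
option names may differ slightly on your version):

    # dump the full configuration, including any location/order constraints
    crm configure show

    # one-shot status including inactive resources and fail counts
    crm_mon -1rf

    # sanity-check the live CIB for configuration errors
    crm_verify -L -V

Fail counts and location constraints are the usual reasons the policy
engine decides a resource "cannot run anywhere".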

Thank you,
Kostia

On Sat, Dec 3, 2016 at 5:54 AM, Darko Gavrilovic <darko at chass.utoronto.ca>
wrote:

> Hello, I have a two-node cluster that seems to be failing to start resources.
>
>  Resource Group: services
>      svc-mysql  (ocf::heartbeat:mysql) Stopped
>      svc-httpd  (ocf::heartbeat:apache) Stopped
>      svc-ssh    (lsb:sshd-virt) Stopped
>      svc-tomcat6        (lsb:tomcat6) Stopped
>      svc-plone  (lsb:plone) Stopped
>      svc-bacula (lsb:bacula-fd-virt) Stopped
>
> When I run "crm resource start services", the service group does not start.
>
> I also tried starting the first resource in the group.
> crm resource start svc-mysql
>
> It does not start either.
>
> The error I am seeing is:
> Dec  2 21:59:43  pengine: [25829]: WARN: native_color: Resource svc-mysql
> cannot run anywhere
> Dec  2 22:00:26  pengine: [25829]: WARN: native_color: Resource svc-mysql
> cannot run anywhere
>
> 4b4f-a239-8a10dad40587, cib=0.3857.2) : Resource op removal
> Dec  2 21:59:32 server1 crmd: [25830]: info: te_rsc_command: Initiating
> action 55: monitor svc-mysql_monitor_0 on kurt.chass.utoronto.ca (local)
> Dec  2 21:59:32 server1 crmd: [25830]: info: do_lrm_rsc_op: Performing
> key=55:14:7:aee06ee3-9576-4b4f-a239-8a10dad40587 op=svc-mysql_monitor_0 )
> Dec  2 21:59:32 server1 crmd: [25830]: info: process_lrm_event: LRM
> operation svc-mysql_monitor_0 (call=163, rc=7, cib-update=249,
> confirmed=true) not running
> Dec  2 21:59:32 server1 crmd: [25830]: info: match_graph_event: Action
> svc-mysql_monitor_0 (55) confirmed on kurt.chass.utoronto.ca (rc=0)
> Dec  2 21:59:32 server1 crmd: [25830]: info: abort_transition_graph:
> te_update_diff:267 - Triggered transition abort (complete=1,
> tag=lrm_rsc_op, id=svc-mysql_monitor_0, magic=0:7;71:5:7:aee06ee3-9576-4b4f-a239-8a10dad40587,
> cib=0.3858.3) : Resource op removal
> Dec  2 21:59:33 server1 crmd: [25830]: info: te_rsc_command: Initiating
> action 56: monitor svc-mysql_monitor_0 on server2
> Dec  2 21:59:33 server1 crmd: [25830]: WARN: status_from_rc: Action 56
> (svc-mysql_monitor_0) on server2 failed (target: 7 vs. rc: 0): Error
> Dec  2 21:59:33 server1 crmd: [25830]: info: abort_transition_graph:
> match_graph_event:272 - Triggered transition abort (complete=0,
> tag=lrm_rsc_op, id=svc-mysql_monitor_0, magic=0:0;56:15:7:aee06ee3-9576-4b4f-a239-8a10dad40587,
> cib=0.3859.2) : Event failed
> Dec  2 21:59:33 server1 crmd: [25830]: info: match_graph_event: Action
> svc-mysql_monitor_0 (56) confirmed on server2 (rc=4)
> Dec  2 21:59:33 server1 crmd: [25830]: info: te_rsc_command: Initiating
> action 187: stop svc-mysql_stop_0 on server2
> Dec  2 21:59:35 server1 crmd: [25830]: info: match_graph_event: Action
> svc-mysql_stop_0 (187) confirmed on server2 (rc=0)
> Dec  2 22:10:20 server1 crmd: [19708]: info: do_lrm_rsc_op: Performing
> key=101:1:7:6e477ca6-4ffe-4e89-82c2-c6149d528128 op=svc-mysql_monitor_0 )
> Dec  2 22:10:20 server1 crmd: [19708]: info: process_lrm_event: LRM
> operation svc-mysql_monitor_0 (call=51, rc=7, cib-update=42,
> confirmed=true) not running
> Dec  2 22:12:24 server1 crmd: [19708]: info: te_rsc_command: Initiating
> action 102: monitor svc-mysql_monitor_0 on server2
> Dec  2 22:12:24 server1 crmd: [19708]: info: match_graph_event: Action
> svc-mysql_monitor_0 (102) confirmed on server2 (rc=0)
>
>
> Any advice on how to tackle this?
>
> dg
>
>
> _______________________________________________
> Users mailing list: Users at oss.clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
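
Regarding the "native_color: Resource svc-mysql cannot run anywhere"
warning above: it usually means the resource has reached its
migration-threshold on every node, or a constraint forbids it everywhere.
A rough sketch of how to check and clear that with crmsh (node names taken
from your logs; adjust as needed):

    # show the fail count for svc-mysql on each node
    crm resource failcount svc-mysql show server1
    crm resource failcount svc-mysql show server2

    # clear the failure history so the cluster will try to place it again
    crm resource cleanup svc-mysql

    # re-check placement scores for the resource afterwards
    crm_simulate -sL | grep svc-mysql

If that does not help, the "crm configure show" output should reveal
whether a -INFINITY location constraint or a missing dependency is
pinning the group.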