<div dir="ltr"><div><div><div>service cman start<br>Starting cluster: <br> Checking if cluster has been disabled at boot... [ OK ]<br> Checking Network Manager... [ OK ]<br> Global setup... [ OK ]<br> Loading kernel modules... [ OK ]<br> Mounting configfs... [ OK ]<br> Starting cman... [ OK ]<br> Waiting for quorum... [ OK ]<br> Starting fenced... [ OK ]<br> Starting dlm_controld... [ OK ]<br> Tuning DLM kernel config... [ OK ]<br> Starting gfs_controld... [ OK ]<br> Unfencing self... [ OK ]<br> Joining fence domain... <br><br></div>It doesnt go beyond this. <br><br></div>This is my pacemaker log<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: crm_ipc_connect: Could not establish pacemakerd connection: Connection refused (111)<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'corosync_quorum' for option: name<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'corosync_cman' for option: name<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'openais_ckpt' for option: name<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: No additional configuration supplied for: service<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional quorum options...<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'quorum_cman' for option: provider<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_cluster_type: Detected an active 'cman' cluster<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: mcp_read_config: Reading configure for stack: cman<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional logging options...<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Defaulting to 'off' for option: debug<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'yes' for option: to_logfile<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found '/var/log/cluster/corosync.log' for option: logfile<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'yes' for option: to_syslog<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'local4' for option: syslog_facility<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: notice: main: Starting Pacemaker 1.1.11 (Build: 97629de): generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman acls<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: main: Maximum core file size is: 18446744073709551615<br>Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd<br>Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: error: cluster_connect_cpg: Could not join the CPG group 'pacemakerd': 6<br>Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: error: main: Couldn't connect to Corosync's CPG service<br>Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: info: 
This is my pacemaker log:

Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: crm_ipc_connect: Could not establish pacemakerd connection: Connection refused (111)
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'corosync_quorum' for option: name
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'corosync_cman' for option: name
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'openais_ckpt' for option: name
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: No additional configuration supplied for: service
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional quorum options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'quorum_cman' for option: provider
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_cluster_type: Detected an active 'cman' cluster
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: mcp_read_config: Reading configure for stack: cman
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional logging options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Defaulting to 'off' for option: debug
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'yes' for option: to_logfile
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found '/var/log/cluster/corosync.log' for option: logfile
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'yes' for option: to_syslog
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'local4' for option: syslog_facility
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: notice: main: Starting Pacemaker 1.1.11 (Build: 97629de): generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman acls
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: main: Maximum core file size is: 18446744073709551615
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: error: cluster_connect_cpg: Could not join the CPG group 'pacemakerd': 6
Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: error: main: Couldn't connect to Corosync's CPG service
Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: info: crm_xml_cleanup: Cleaning up memory from libxml2

This is my message log:

Aug 3 14:35:31 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:35:33 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:35:33 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:35:34 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:35:37 vmx-occ-004 /usr/sbin/gmond[4590]: Error creating multicast server mcast_join=10.61.40.194 port=8649 mcast_if=NULL family='inet4'. Will try again...#012
Aug 3 14:35:38 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:35:41 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:35:44 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:35:48 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:35:48 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:35:48 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:35:51 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:35:54 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:35:58 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:36:01 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:36:03 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:36:03 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:36:04 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:36:08 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:36:11 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:36:14 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:36:18 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:36:18 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:36:18 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:36:21 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:36:24 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
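From what I can tell, the repeated "cpg_join error retrying" messages suggest the corosync/totem membership never forms, which I understand is most often multicast traffic being blocked between the nodes (the gmond multicast error may point the same way). A quick check I can run on both nodes, as a sketch only, assuming omping is installed and corosync is using its default UDP ports 5404/5405:

# run simultaneously on both nodes to verify multicast and unicast traffic between them
omping vmx-occ-004 vmx-occ-005

# look for firewall rules that might drop corosync traffic
iptables -L -n | grep -E '5404|5405'

# ring status as corosync itself sees it
corosync-cfgtool -s
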
On Mon, Aug 3, 2015 at 6:02 PM, emmanuel segura <emi2fast@gmail.com> wrote:

Sorry, but I think it would be easier to help you if you provide more
information about your problem.
<div><div class="h5"><br>
2015-08-03 14:14 GMT+02:00 Vijay Partha <vijaysarathy94@gmail.com>:
> Hi
>
> When I start cman, it hangs at "Joining fence domain".
>
> This is my message log:
>
> Aug 3 14:12:16 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> Aug 3 14:12:19 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> Aug 3 14:12:24 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> Aug 3 14:12:27 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> Aug 3 14:12:29 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> Aug 3 14:12:34 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> Aug 3 14:12:37 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> Aug 3 14:12:39 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> Aug 3 14:12:44 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> Aug 3 14:12:47 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> Aug 3 14:12:49 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> Aug 3 14:12:54 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
>
> How to solve this issue?
> --
> With Regards
> P.Vijay
>
--
.~.
/V\
// \\
/( )\
^`~'^

--
With Regards
P.Vijay