Greetings,

I had a working corosync/pacemaker two-node configuration doing simple filesystem failover and stonith fencing. After seeing some odd behavior, and after someone suggested a bug in 1.1.7-6 not playing nicely with corosync 1.4.1-7, I decided to update to 1.1.8-4. After the update my cluster will not initialize. The first node starts fine: corosync and pacemaker start via their init scripts and the resources show good status. The rings are in good status as well. When I start corosync and pacemaker on the second node, corosync starts fine but pacemaker fails to start. /var/log/messages reads:
corosync[3344]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
corosync[3344]: [MAIN ] Corosync built-in features: nss dbus rdma snmp
corosync[3344]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
corosync[3344]: [TOTEM ] Initializing transport (UDP/IP Multicast).
corosync[3344]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
corosync[3344]: [TOTEM ] The network interface [10.0.0.102] is now up.
corosync[3344]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
corosync[3344]: [SERV ] Service engine loaded: corosync configuration service
corosync[3344]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
corosync[3344]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
corosync[3344]: [SERV ] Service engine loaded: corosync profile loading service
corosync[3344]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
corosync[3344]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
corosync[3344]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
corosync[3344]: [CPG ] chosen downlist: sender r(0) ip(10.0.0.102) ; members(old:0 left:0)
corosync[3344]: [MAIN ] Completed service synchronization, ready to provide service.
corosync[3344]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
corosync[3344]: [CPG ] chosen downlist: sender r(0) ip(10.0.0.101) ; members(old:1 left:0)
corosync[3344]: [MAIN ] Completed service synchronization, ready to provide service.
pacemakerd[3364]: notice: get_cluster_type: This installation of Pacemaker does not support the 'heartbeat' cluster infrastructure. Terminating.
cibadmin[3376]: notice: crm_log_args: Invoked: cibadmin -l -Q
corosync[3344]: [SERV ] Unloading all Corosync service engines.

I have no heartbeat rpms installed on either node of the cluster, so I don't understand why pacemaker decides the cluster infrastructure is 'heartbeat' instead of corosync.
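One thing I was unsure about after the upgrade: does the stack still need to be declared to corosync via the pacemaker plugin stanza? My (possibly outdated) understanding is that with corosync 1.x, and pacemakerd started from its own init script, it would look roughly like this, e.g. in /etc/corosync/service.d/pcmk or directly in corosync.conf:

    service {
        # Load the Pacemaker plugin; ver: 1 means pacemakerd is started
        # separately (by its init script) rather than spawned by corosync.
        name: pacemaker
        ver: 1
    }

If get_cluster_type() keys off that declaration, a missing or unread service block might explain the fallback to 'heartbeat', but that is just a guess on my part.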
I built the pacemaker rpms from the clusterlabs git tree, using the same combination of --with/--without options that 1.1.7-6 was compiled with. I verified this by rebuilding the pacemaker 1.1.7-6 source rpm and checking the build output to determine which options it used.
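In case it helps with comparing the two builds, this is roughly how I looked at what each package supports (I believe pacemakerd has a --features flag in 1.1.8 that lists the compiled-in feature set; for 1.1.7 I went by the rpmbuild output):

    # Print the version and the list of features/stacks this
    # pacemakerd binary was built with (1.1.8 build).
    pacemakerd --features

    # Rebuild the old source rpm and watch the configure line in the
    # output for the --with/--without options (filename approximate).
    rpmbuild --rebuild pacemaker-1.1.7-6.src.rpm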
I have also checked networking between the nodes and that the multicast address works (the checks I ran are sketched in the P.S. below). 1.1.7-6 works, 1.1.8-4 does not.

OS/Software info:

    RHEL 6.2 x86_64
    corosync 1.4.1-7
    cluster-glue 1.0.5-6
    clusterlib 3.0.12
    libqb 0.14.2-1
    pacemaker 1.1.8-4
    openais 1.1.1-7

corosync.conf (if needed):

    # Please read the corosync.conf.5 manual page
    compatibility: whitetank

    totem {
        version: 2
        secauth: off
        threads: 0
        interface {
            ringnumber: 0
            bindnetaddr: 10.0.0.0
            mcastaddr: 226.94.1.1
            mcastport: 5405
            ttl: 1
        }
    }

    logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
            subsys: AMF
            debug: off
        }
    }

    aisexec {
        user: root
        group: root
    }

    amf {
        mode: disabled
    }

Any help or suggestions are greatly appreciated!
--Jeff
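P.S. The networking/multicast checks mentioned above were along these lines (exact invocations from memory; omping comes from a separate package on RHEL6):

    # Ring status as seen by corosync on each node
    corosync-cfgtool -s

    # Current totem membership according to corosync
    corosync-objctl runtime.totem.pg.mrp.srp.members

    # Basic two-way multicast test, run on both nodes at the same time
    # (with corosync stopped, -m/-p can be added to match its group/port)
    omping 10.0.0.101 10.0.0.102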