<html><head></head><body><div style="font-family: Verdana;font-size: 12.0px;"><div>
<div>I tried to reproduce this with the following test, because I had the same problem.</div>

<div> </div>

<div>Starting point:</div>

<div>One-node cluster: node int2node1 (Pacemaker 1.1.7) running with IP address 10.16.242.231, no-quorum-policy=ignore, DC is int2node1.</div>
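<div> </div>

<div>For reference, roughly how this starting point was configured (a sketch in crm shell syntax; the ocf:pacemaker:SysInfo agent and the monitor interval are assumptions on my part, only the quorum policy and the clone name come from the output below):</div>

<div># sketch of the initial configuration on int2node1<br/>
crm configure property no-quorum-policy=ignore<br/>
# the clone seen in crm_mon; agent and monitor interval are assumed here<br/>
crm configure primitive resSysInfo ocf:pacemaker:SysInfo op monitor interval=60s<br/>
crm configure clone cloneSysInfo resSysInfo</div>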

<div> </div>

<div>
<div>[root@int2node1 sysconfig]# crm_mon -1<br/>
============<br/>
Last updated: Wed Apr 24 09:49:32 2013<br/>
Last change: Wed Apr 24 09:44:55 2013 via crm_resource on int2node1<br/>
Stack: openais<br/>
Current DC: int2node1 - partition WITHOUT quorum<br/>
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14<br/>
1 Nodes configured, 2 expected votes<br/>
1 Resources configured.<br/>
============</div>

<div>Online: [ int2node1 ]</div>

<div> Clone Set: cloneSysInfo [resSysInfo]<br/>
     Started: [ int2node1 ]</div>

<div> </div>

<div>Next step:</div>

<div>Node int2node2 (Pacemaker 1.1.8) with IP address 10.16.242.233 joins the cluster.</div>
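<div> </div>

<div>(A sketch of what "joins" means here, assuming the usual CentOS 6 init scripts were used to bring the second node up; not copied literally from my shell history:)</div>

<div># on int2node2 (Pacemaker 1.1.8, corosync 1.4.1-15)<br/>
service corosync start<br/>
service pacemaker start</div>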

<div> </div>

<div>Result:</div>

<div> </div>

<div>
<div>[root@int2node1 sysconfig]# crm_mon -1<br/>
============<br/>
Last updated: Wed Apr 24 10:14:18 2013<br/>
Last change: Wed Apr 24 10:05:20 2013 via crmd on int2node1<br/>
Stack: openais<br/>
Current DC: int2node1 - partition WITHOUT quorum<br/>
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14<br/>
2 Nodes configured, 2 expected votes<br/>
1 Resources configured.<br/>
============</div>

<div>Online: [ int2node1 ]<br/>
OFFLINE: [ int2node2 ]</div>

<div> Clone Set: cloneSysInfo [resSysInfo]<br/>
     Started: [ int2node1 ]</div>

<div> </div>

<div>
<div>[root@int2node1 sysconfig]# corosync-objctl | grep member<br/>
runtime.totem.pg.mrp.srp.members.1743917066.ip=r(0) ip(10.16.242.231)<br/>
runtime.totem.pg.mrp.srp.members.1743917066.join_count=1<br/>
runtime.totem.pg.mrp.srp.members.1743917066.status=joined<br/>
runtime.totem.pg.mrp.srp.members.1777471498.ip=r(0) ip(10.16.242.233)<br/>
runtime.totem.pg.mrp.srp.members.1777471498.join_count=1<br/>
runtime.totem.pg.mrp.srp.members.1777471498.status=joined</div>

<div> </div>

<div>
<div>[root@int2node1 sysconfig]# crm_node -l<br/>
1743917066 int2node1 member</div>

<div> </div>

<div>
<div>[root@int2node2 ~]# crm_mon -1<br/>
Last updated: Wed Apr 24 11:27:39 2013<br/>
Last change: Wed Apr 24 10:07:45 2013 via crm_resource on int2node2<br/>
Stack: classic openais (with plugin)<br/>
Current DC: int2node2 - partition WITHOUT quorum<br/>
Version: 1.1.8-7.el6-394e906<br/>
2 Nodes configured, 2 expected votes<br/>
1 Resources configured.</div>

<div><br/>
Online: [ int2node2 ]<br/>
OFFLINE: [ int2node1 ]</div>

<div> Clone Set: cloneSysInfo [resSysInfo]<br/>
     Started: [ int2node2 ]</div>

<div> </div>

<div>
<div>[root@int2node2 ~]# corosync-objctl | grep member<br/>
runtime.totem.pg.mrp.srp.members.1743917066.ip=r(0) ip(10.16.242.231)<br/>
runtime.totem.pg.mrp.srp.members.1743917066.join_count=1<br/>
runtime.totem.pg.mrp.srp.members.1743917066.status=joined<br/>
runtime.totem.pg.mrp.srp.members.1777471498.ip=r(0) ip(10.16.242.233)<br/>
runtime.totem.pg.mrp.srp.members.1777471498.join_count=1<br/>
runtime.totem.pg.mrp.srp.members.1777471498.status=joined</div>

<div> </div>

<div>
<div>[root@int2node2 ~]# crm_node -l<br/>
1777471498 int2node2 member</div>
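<div> </div>

<div>So at the corosync/totem layer both nodes see each other as joined, while at the Pacemaker layer each node only lists itself as a member. A quick cross-check of the partition view on both nodes would be (a sketch, not part of the output above):</div>

<div># should print both node names once Pacemaker membership is consistent<br/>
crm_node -p</div>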

<div> </div>

<div>Pacemaker log of int2node2, with the trace setting enabled:</div>

<div><a href="https://www.dropbox.com/s/04ciy2g6dfbauxy/pacemaker.log?n=165978094" target="_blank">https://www.dropbox.com/s/04ciy2g6dfbauxy/pacemaker.log?n=165978094</a></div>

<div>On int2node1 (1.1.7) the trace setting did not create the pacemaker.log file.</div>
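<div> </div>

<div>For reference, this is roughly what the trace setting in /etc/sysconfig/pacemaker looked like (a sketch; only the PCMK_trace_functions line comes from Andrew's suggestion quoted below, the debug line is my assumption):</div>

<div># /etc/sysconfig/pacemaker<br/>
export PCMK_trace_functions=ais_dispatch_message<br/>
# assumption: raise the log level so the traced function actually shows up<br/>
export PCMK_debug=yes</div>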
</div>
</div>
</div>
</div>
</div>
</div>
</div>

<div>
<div> </div>

<div>Below is an excerpt of the CIB, with the node information, taken from int2node2.</div>

<div>[root@int2node2 ~]# cibadmin -Q<br/>
<cib epoch="17" num_updates="51" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" update-origin="int2node2" update-client="crm_resource" cib-last-written="Wed Apr 24 10:07:45 2013" have-quorum="0" dc-uuid="int2node2"><br/>
  <configuration><br/>
    <crm_config><br/>
      <cluster_property_set id="cib-bootstrap-options"><br/>
      ...<br/>
      </cluster_property_set><br/>
    </crm_config><br/>
    <nodes><br/>
      <node id="int2node2" uname="int2node2"/><br/>
      <node id="int2node1" uname="int2node1"/><br/>
    </nodes><br/>
    <resources><br/>
    ...<br/>
    </resources><br/>
    <rsc_defaults><br/>
    ...<br/>
    </rsc_defaults><br/>
  </configuration><br/>
  <status><br/>
    <node_state id="int2node2" uname="int2node2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"><br/>
      <transient_attributes id="int2node2"><br/>
        <instance_attributes id="status-int2node2"><br/>
        ...<br/>
        </instance_attributes><br/>
      </transient_attributes><br/>
      <lrm id="int2node2"><br/>
        <lrm_resources><br/>
        ...<br/>
        </lrm_resources><br/>
      </lrm><br/>
    </node_state><br/>
    <node_state id="int2node1" uname="int2node1" in_ccm="true" crmd="online" join="down" crm-debug-origin="do_state_transition"/><br/>
  </status><br/>
</cib><br/>
 </div>

<div>On int2node1 (the 1.1.7 node) the node states in the CIB are different:</div>

<div>
<div>  <status><br/>
    <node_state id="int2node1" uname="int2node1" ha="active" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_state_transition" shutdown="0"><br/>
      <transient_attributes id="int2node1"></div>

<div>
<div>      </transient_attributes><br/>
      <lrm id="int2node1"><br/>
        <lrm_resources></div>

<div>
<div>        ...<br/>
        </lrm_resources><br/>
      </lrm><br/>
    </node_state><br/>
    <node_state id="int2node2" uname="int2node2" crmd="online" crm-debug-origin="do_state_transition" ha="active" in_ccm="true" join="pending"/><br/>
  </status></div>

<div> </div>
</div>
</div>
</div>
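<div>To compare the two views side by side, pulling just the status section from each node is probably the quickest way, e.g. something like this sketch:</div>

<div># dump only the status section of the CIB, run on each node and diff by hand<br/>
cibadmin -Q -o status</div>

<div> </div>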

<div>Rainer</div>

<div name="quote" style="margin:10px 5px 5px 10px; padding: 10px 0 10px 10px; border-left:2px solid #C3D9E5; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">
<div style="margin:0 0 10px 0;"><b>Gesendet:</b> Mittwoch, 17. April 2013 um 07:32 Uhr<br/>
<b>Von:</b> "Andrew Beekhof" <andrew@beekhof.net><br/>
<b>An:</b> "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org><br/>
<b>Betreff:</b> Re: [Pacemaker] 1.1.8 not compatible with 1.1.7?</div>

<div name="quoted-content"><br/>
On 15/04/2013, at 7:08 PM, Pavlos Parissis <pavlos.parissis@gmail.com> wrote:<br/>
<br/>
> Hoi,<br/>
><br/>
> I upgraded 1st node and here are the logs<br/>
> <a href="https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node1.debuglog" target="_blank">https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node1.debuglog</a><br/>
> <a href="https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node2.debuglog" target="_blank">https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node2.debuglog</a><br/>
><br/>
> Enabling tracing on the mentioned functions didn't give at least to me any more information.<br/>
<br/>
10:22:08 pacemakerd[53588]: notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log<br/>
<br/>
That's the file(s) we need :)<br/>
<br/>
><br/>
> Cheers,<br/>
> Pavlos<br/>
><br/>
><br/>
> On 15 April 2013 01:42, Andrew Beekhof <andrew@beekhof.net> wrote:<br/>
><br/>
> On 15/04/2013, at 7:31 AM, Pavlos Parissis <pavlos.parissis@gmail.com> wrote:<br/>
><br/>
> > On 12/04/2013 09:37 PM, Pavlos Parissis wrote:<br/>
> >> Hoi,<br/>
> >><br/>
> >> As I wrote to another post[1] I failed to upgrade to 1.1.8 for a 2 node<br/>
> >> cluster.<br/>
> >><br/>
> >> Before the upgrade process both nodes are using CentOS 6.3, corosync<br/>
> >> 1.4.1-7 and pacemaker-1.1.7.<br/>
> >><br/>
> >> I followed the rolling upgrade process, so I stopped pacemaker and then<br/>
> >> corosync on node1 and upgraded to CentOS 6.4. The OS upgrade upgrades<br/>
> >> also pacemaker to 1.1.8-7 and corosync to 1.4.1-15.<br/>
> >> The upgrade of rpms went smoothly as I knew about the crmsh issue so I<br/>
> >> made sure I had crmsh rpm on my repos.<br/>
> >><br/>
> >> Corosync started without any problems and both nodes could see each<br/>
> >> other[2]. But for some reason node2 failed to receive a reply on join<br/>
> >> offer from node1 and node1 never joined the cluster. Node1 formed a new<br/>
> >> cluster as it never got an reply from node2, so I ended up with a<br/>
> >> split-brain situation.<br/>
> >><br/>
> >> Logs of node1 can be found here<br/>
> >> <a href="https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node1.log" target="_blank">https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node1.log</a><br/>
> >> and of node2 here<br/>
> >> <a href="https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node2.log" target="_blank">https://dl.dropboxusercontent.com/u/1773878/pacemaker-issue/node2.log</a><br/>
> >><br/>
> ><br/>
> > Doing a Disconnect & Reattach upgrade of both nodes at the same time<br/>
> > brings me a working 1.1.8 cluster. Any attempt to make a 1.1.8 node to<br/>
> > join a cluster with a 1.1.7 failed.<br/>
><br/>
> There wasn't enough detail in the logs to suggest a solution, but if you add the following to /etc/sysconfig/pacemaker and re-test, it might shed some additional light on the problem.<br/>
><br/>
> export PCMK_trace_functions=ais_dispatch_message<br/>
><br/>
> Certainly there was no intention to make them incompatible.<br/>
><br/>
> ><br/>
> > Cheers,<br/>
> > Pavlos<br/>
> ><br/>
> ><br/>
<br/>
<br/>
_______________________________________________<br/>
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org<br/>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br/>
<br/>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br/>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br/>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a></div>
</div>
</div>
</div></div></body></html>