[Pacemaker] Pacemaker on CentOS 5.6 & CentOS 6.0

sdr(friedrich reichhart) fr at sdr.at
Tue Sep 6 13:39:20 UTC 2011


Hi Florian!

> -----Original Message-----
> From: Florian Haas [mailto:f.g.haas at gmx.net]
> Sent: Tuesday, 06 September 2011 08:38
> To: pacemaker at oss.clusterlabs.org
> Subject: Re: [Pacemaker] Pacemaker on CentOS 5.6 & CentOS 6.0
> 
> On 09/03/11 09:42, sdr(friedrich reichhart) wrote:
> > Hi all!
> >
> > I'm working on a cluster with CentOS 5 & 6:
> > Storage01 - CentOS 5
> > Storage02 - CentOS 5
> > KVM02 - CentOS 5
> >
> > This part works fine.
> >
> > KVM04 - CentOS 6
> 
> So this is meant to be a mixed CentOS 5/6 cluster? That's tricky to get
> right; if this were a RHEL cluster, this type of configuration would not
> be supported by Red Hat.

Yes, it seems to be really tricky, but to keep high availability I have to
upgrade node by node, or am I wrong?
Maybe so, but we do not use the RHEL cluster suite anyway.

> 
> Are you making sure you are running the same Pacemaker major version
> (1.1) on all of these nodes?
> 
> > [root@kvm04 ~]# /etc/init.d/corosync start && /etc/init.d/pacemaker start
> > Starting Corosync Cluster Engine (corosync):               [  OK  ]
> > Starting Pacemaker Cluster Manager:                        [  OK  ]
> >
> > Both "clusters" see the other node / nodes, meaning storage01, storage02
> > + kvm02 see node kvm04, and node kvm04 sees all the other nodes.
> 
> What does corosync-cfgtool -s say?

On kvm04 - CentOS 6.0
[root@kvm04 service.d]# corosync-cfgtool -s
Printing ring status.
Local node ID 1745070272
RING ID 0
        id      = 192.168.3.104
        status  = ring 0 active with no faults

> 
> > [root@kvm04 ~]# crm_mon -1
> > ============
> > Last updated: Sat Sep  3 09:45:26 2011
> > Stack: openais
> > Current DC: kvm04.sdr.at - partition WITHOUT quorum
> > Version: 1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
> > 4 Nodes configured, 2 expected votes
> > 0 Resources configured.
> > ============
> >
> > Online: [ kvm04.sdr.at ]
> > OFFLINE: [ kvm02.sdr.at inet-storage01.sdr.at inet-storage02.sdr.at ]
> 
> It looks as though you brought kvm04 online and it formed a membership of
> just one node, promoting itself to the DC. That would mean it is now
> using a CIB valid as per the Pacemaker 1.1 schema. If the other nodes are
> on 1.0, then this would mean they are unable to join this cluster.

Yes, but all nodes in the cluster are on 1.1, I hope...
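As a sanity check on the 1.0-versus-1.1 theory: the DC version strings that crm_mon prints on the two partitions (both quoted in this mail) can be compared directly. A minimal sketch, using those strings verbatim:

```shell
# Feature-set strings exactly as reported by crm_mon on each DC
kvm04="1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f"
kvm02="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f"

# Keep only the upstream version (everything before the first '-')
if [ "${kvm04%%-*}" = "${kvm02%%-*}" ]; then
    echo "same Pacemaker version: ${kvm04%%-*}"
fi
```

Both partitions report 1.1.5 with the same source hash, so a CIB schema mismatch between 1.0 and 1.1 seems unlikely here.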

> 
> > The OFFLINE nodes were not entered manually; Pacemaker found them, just
> > as it did "on the other side".
> >
> > I'm using standard repositories + EPEL + Clusterlabs.
> >
> > What may be wrong?
> > Config files, auth-file, service.d might be configured ok.
> 
> "might be"?
> 
> It would be helpful if we could see your corosync.conf and files in
> service.d. And also, your exact version information for pacemaker,
> corosync, and cluster-glue.

@KVM04 - CentOS 6.0

After rm -f /var/lib/heartbeat/crm/* and a fresh start:

============
Last updated: Tue Sep  6 12:19:42 2011
Current DC: NONE
0 Nodes configured, unknown expected votes
0 Resources configured.
============

After 1 minute...

============
Last updated: Tue Sep  6 12:20:33 2011
Stack: openais
Current DC: kvm04.sdr.at - partition WITHOUT quorum
Version: 1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
4 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ kvm04.sdr.at ]
OFFLINE: [ kvm02.sdr.at inet-storage01.sdr.at inet-storage02.sdr.at ]


Installed Packages
cluster-glue.x86_64     1.0.5-2.el6       @base
corosync.x86_64         1.2.3-36.el6      @scientific-linux
pacemaker.x86_64        1.1.5-5.el6       @scientific-linux

I also tried the original CentOS 6 packages as well.

[root@kvm04 service.d]# corosync-cfgtool -s
Printing ring status.
Local node ID 1745070272
RING ID 0
        id      = 192.168.3.104
        status  = ring 0 active with no faults

[root@kvm04 service.d]# cat /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.3.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: no
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

[root@kvm04 service.d]# cat /etc/corosync/service.d/pcmk
service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver:  1
}


@KVM02 - CentOS 5.5

[root@kvm02 ~]# crm_mon -1
============
Last updated: Tue Sep  6 15:26:24 2011
Stack: openais
Current DC: kvm02.sdr.at - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
4 Nodes configured, 3 expected votes
17 Resources configured.
============

Online: [ inet-storage01.sdr.at inet-storage02.sdr.at kvm02.sdr.at ]
OFFLINE: [ kvm04.sdr.at ]
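A side note: the two partitions do not even agree on expected votes (kvm04's crm_mon header above says 2, this one says 3), although both count 4 configured nodes. Assuming the usual majority rule of floor(N/2)+1 votes, the quorum states crm_mon reports on both sides are at least self-consistent; a minimal sketch:

```shell
# Majority quorum for N expected votes: floor(N/2) + 1 (assumed rule)
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 2    # kvm04 expects 2 votes -> needs 2, has only 1: WITHOUT quorum
quorum 3    # kvm02 expects 3 votes -> needs 2, has 3 online: with quorum
quorum 4    # with all 4 nodes voting, 3 votes would be needed
```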


Installed Packages
cluster-glue.x86_64     1.0.6-1.6.el5     installed
corosync.i386           1.2.7-1.1.el5     installed
corosync.x86_64         1.2.7-1.1.el5     installed
pacemaker.i386          1.1.5-1.1.el5     installed
pacemaker.x86_64        1.1.5-1.1.el5     installed


[root@kvm02 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 1694738624
RING ID 0
        id      = 192.168.3.101
        status  = ring 0 active with no faults
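Incidentally, the two "Local node ID" values are consistent with corosync deriving the default IPv4 node ID from the ring 0 address. A sketch reproducing the IDs seen in this mail, assuming the 32-bit little-endian interpretation these x86_64 hosts appear to use:

```shell
# Derive corosync's default IPv4 node ID from a dotted-quad address
# (bytes assembled little-endian, as observed on these hosts)
nodeid_from_ip() {
    local IFS=.
    set -- $1
    echo $(( ($4 << 24) | ($3 << 16) | ($2 << 8) | $1 ))
}

nodeid_from_ip 192.168.3.104   # kvm04 -> 1745070272
nodeid_from_ip 192.168.3.101   # kvm02 -> 1694738624
```

So both daemons are binding to the addresses we expect; the ring itself looks healthy on each side.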


[root@kvm02 ~]# cat /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.3.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

[root@kvm02 ~]# cat /etc/corosync/service.d/pcmk
service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver:  1
}

> 
> Cheers,
> Florian

Thanks for help,
Fritz.





