[ClusterLabs] CentOS 7 - Corosync configuration
Willi Fehler
willi.fehler at t-online.de
Fri Mar 6 09:42:15 CET 2015
Hi Michael,
I have only 2 nodes. I've already restarted Corosync on both nodes.
Nothing changed.
[root@linsrv006 corosync]# uname -n
linsrv006.willi-net.local
[root@linsrv007 ~]# uname -n
linsrv007.willi-net.local
[root@linsrv006 corosync]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.1 linsrv006
10.10.10.2 linsrv007
[root@linsrv007 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.1 linsrv006
10.10.10.2 linsrv007
10.10.10.2 linsrv007
[root@linsrv006 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.0.9
status = ring 0 active with no faults
RING ID 1
id = 10.10.10.1
status = ring 1 active with no faults
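(A note on the corosync-objctl question quoted below: on CentOS 7 / corosync 2.x that tool has been replaced by corosync-cmapctl, so the membership corosync itself currently sees can be listed with something along these lines:

    [root@linsrv006 ~]# corosync-cmapctl | grep members

This should print the runtime.totem.pg.mrp.srp.members.* keys, i.e. the node IDs and ring addresses corosync is counting at the moment.)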
Regards - Willi
On 06.03.15 at 09:36, Michael Schwartzkopff wrote:
> On Friday, 6 March 2015, 09:24:25, Willi Fehler wrote:
>> Hi,
>>
>> I'm trying to build a Pacemaker/Corosync cluster on CentOS 7. The default
>> Corosync configuration with one ring works, but then I only have one
>> ring and no encryption.
>>
>> RING ID 1
>> id = 10.10.10.1
>> status = ring 1 active with no faults
>>
>> I've tried to activate the following configuration but it doesn't work.
>>
>> [root@linsrv006 corosync]# cat /etc/corosync/corosync.conf
>> totem {
>>     version: 2
>>     secauth: on
>>     threads: 0
>>     rrp_mode: active
>>     interface {
>>         ringnumber: 0
>>         bindnetaddr: 192.168.0.0
>>         mcastaddr: 226.94.42.7
>>         mcastport: 5411
>>     }
>>     interface {
>>         ringnumber: 1
>>         bindnetaddr: 10.10.10.0
>>         mcastaddr: 226.94.42.11
>>         mcastport: 5419
>>     }
>>     token: 10000
>>     token_retransmits_before_loss_const: 40
>>     rrp_problem_count_timeout: 20000
>>     nodeid: 3
>> }
>> quorum {
>>     provider: corosync_votequorum
>>     expected_votes: 3
>> }
>>
>> logging {
>>     to_syslog: yes
>> }
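(A sketch, not a tested fix, relating to the corosync.conf quoted above: on corosync 2.x the cluster members are usually declared explicitly in a nodelist block, with one nodeid per node instead of a single nodeid in the totem section, and a two-node cluster normally uses two_node: 1 rather than expected_votes: 3. Auto-derived node IDs and mixed short/fully-qualified host names are one possible reason pcs ends up showing duplicate node entries. Assuming the two hosts above, such a block might look like:

    nodelist {
        node {
            ring0_addr: 192.168.0.9
            ring1_addr: 10.10.10.1
            nodeid: 1
        }
        node {
            # linsrv007's address on the 192.168.0.0 network is not shown in this thread
            ring0_addr: <linsrv007 ring-0 address>
            ring1_addr: 10.10.10.2
            nodeid: 2
        }
    }
    quorum {
        provider: corosync_votequorum
        two_node: 1
    }

Note also that secauth: on requires /etc/corosync/authkey on both nodes, generated once with corosync-keygen and copied to the second node; without it corosync will refuse to start.)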
>>
>>
>> The problem is also that pcs status is showing 3 nodes.
>>
>> [root@linsrv006 corosync]# pcs status
>> Cluster name:
>> Last updated: Fri Mar 6 09:10:41 2015
>> Last change: Fri Mar 6 09:10:30 2015 via crmd on linsrv006.willi-net.local
>> Stack: corosync
>> Current DC: NONE
>> 4 Nodes configured
>> 9 Resources configured
>>
>>
>> OFFLINE: [ linsrv006 linsrv006.willi-net.local linsrv006.willi-net.local linsrv007 ]
> Corosync even knows about 4 nodes.
> How many nodes do you have? What is their "uname -n"?
> What does corosync-objctl say?
> What happens if you stop corosync on all nodes and start it again?
>
>
> Kind regards,
>
> Michael Schwartzkopff
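(Regarding the full stop/start of corosync that Michael suggests above, purely as an illustration and assuming pcs/pcsd is set up on both CentOS 7 nodes, the cluster-wide way would be roughly:

    [root@linsrv006 ~]# pcs cluster stop --all
    [root@linsrv006 ~]# pcs cluster start --all

or, per node, systemctl stop pacemaker corosync followed by systemctl start corosync pacemaker.)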