[ClusterLabs] Issue in starting Pacemaker Virtual IP in RHEL 7
Somanath Jeeva
somanath.jeeva at ericsson.com
Mon Nov 6 05:43:42 EST 2017
Hi
I am using a two-node Pacemaker cluster with NIC teaming enabled. The cluster has:
1. Two team interfaces on different subnets.
2. An NFS VIP plumbed onto team1.
3. The Pacemaker VirtualIP configured to be plumbed onto team0 (Corosync ring number is 0).
In this case Corosync takes the NFS VIP as its ring address and then looks for that address in corosync.conf. Since the conf file has the team0 hostname, the Corosync start fails (see the netmask check after the ip a output below).
Outputs:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0 state UP qlen 1000
    link/ether 38:63:bb:3f:a4:ac brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team1 state UP qlen 1000
    link/ether 38:63:bb:3f:a4:ad brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 38:63:bb:3f:a4:ae brd ff:ff:ff:ff:ff:ff
5: eth3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 38:63:bb:3f:a4:af brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0 state UP qlen 1000
    link/ether 36:f7:05:1f:b3:b1 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team1 state UP qlen 1000
    link/ether 38:63:bb:3f:a4:ad brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 28:80:23:a7:dd:fe brd ff:ff:ff:ff:ff:ff
9: eth7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 28:80:23:a7:dd:ff brd ff:ff:ff:ff:ff:ff
10: team1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 38:63:bb:3f:a4:ad brd ff:ff:ff:ff:ff:ff
    inet 10.64.23.117/28 brd 10.64.23.127 scope global team1
       valid_lft forever preferred_lft forever
    inet 10.64.23.121/24 scope global secondary team1:~m0
       valid_lft forever preferred_lft forever
    inet6 fe80::3a63:bbff:fe3f:a4ad/64 scope link
       valid_lft forever preferred_lft forever
11: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 38:63:bb:3f:a4:ac brd ff:ff:ff:ff:ff:ff
    inet 10.64.23.103/28 brd 10.64.23.111 scope global team0
       valid_lft forever preferred_lft forever
    inet6 fe80::3a63:bbff:fe3f:a4ac/64 scope link
       valid_lft forever preferred_lft forever
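As far as I understand, Corosync masks bindnetaddr with each candidate address's own netmask when picking the address to bind to, so the /24 NFS VIP matches 10.64.23.96 just like the /28 team0 address does. A quick check on my side with ipcalc (from the initscripts package) seems to confirm this:

$ ipcalc -n 10.64.23.103/28    # team0 address
NETWORK=10.64.23.96
$ ipcalc -n 10.64.23.96/28     # bindnetaddr under a /28 mask -- matches team0
NETWORK=10.64.23.96
$ ipcalc -n 10.64.23.121/24    # NFS VIP network on team1
NETWORK=10.64.23.0
$ ipcalc -n 10.64.23.96/24     # bindnetaddr under the VIP's /24 mask -- also matches
NETWORK=10.64.23.0

That would fit the log line below where Corosync reports binding to 10.64.23.121.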
Corosync Conf File:
$ cat /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: DES
    transport: udp
    rrp_mode: passive

    interface {
        ringnumber: 0
        bindnetaddr: 10.64.23.96
        mcastaddr: 224.1.1.1
        mcastport: 6860
    }
}

nodelist {
    node {
        ring0_addr: dl380x4415
        nodeid: 1
    }
    node {
        ring0_addr: dl360x4405
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
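Would pointing the nodelist at explicit ring0 IP addresses (instead of hostnames plus bindnetaddr) make the binding unambiguous? An untested sketch, using the team0 addresses from /etc/hosts below:

nodelist {
    node {
        ring0_addr: 10.64.23.103
        nodeid: 1
    }
    node {
        ring0_addr: 10.64.23.105
        nodeid: 2
    }
}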
/etc/hosts:
$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.64.23.101 dl380x4416
10.64.23.104 dl380x4389
10.64.23.106 dl360x4387
10.64.23.103 dl380x4415
10.64.23.105 dl360x4405
10.64.23.115 dl380x4416-int
10.64.23.117 dl380x4415-int
10.64.23.119 dl360x4405-int
10.64.23.120 dl360x4387-int
10.64.23.118 dl380x4389-int
10.64.23.102 dl380x4414
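For reference, the node names used in corosync.conf resolve to the team0 addresses here, not to the NFS VIP:

$ getent hosts dl380x4415 dl360x4405
10.64.23.103    dl380x4415
10.64.23.105    dl360x4405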
Logs:
[3029] dl380x4415 corosyncerror [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356.
[19040] dl380x4415 corosyncnotice [MAIN ] Corosync Cluster Engine ('2.4.0'): started and ready to provide service.
[19040] dl380x4415 corosyncinfo [MAIN ] Corosync built-in features: dbus systemd xmlconf qdevices qnetd snmp pie relro bindnow
[19040] dl380x4415 corosyncnotice [TOTEM ] Initializing transport (UDP/IP Multicast).
[19040] dl380x4415 corosyncnotice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
[19040] dl380x4415 corosyncnotice [TOTEM ] The network interface [10.64.23.121] is now up.
[19040] dl380x4415 corosyncnotice [SERV ] Service engine loaded: corosync configuration map access [0]
[19040] dl380x4415 corosyncinfo [QB ] server name: cmap
[19040] dl380x4415 corosyncnotice [SERV ] Service engine loaded: corosync configuration service [1]
[19040] dl380x4415 corosyncinfo [QB ] server name: cfg
[19040] dl380x4415 corosyncnotice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
[19040] dl380x4415 corosyncinfo [QB ] server name: cpg
[19040] dl380x4415 corosyncnotice [SERV ] Service engine loaded: corosync profile loading service [4]
[19040] dl380x4415 corosyncnotice [QUORUM] Using quorum provider corosync_votequorum
[19040] dl380x4415 corosynccrit [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
[19040] dl380x4415 corosyncerror [SERV ] Service engine 'corosync_quorum' failed to load for reason 'configuration error: nodelist or quorum.expected_votes must be configured!'
With Regards
Somanath Thilak J