[ClusterLabs] Ubuntu 16.04 - 2 node setup
james.booth at primarytec.co.uk
Wed Apr 12 15:55:03 EDT 2017
Apologies for burdening you with my issue, but I'm at my wits' end!
I'm trying to set up a 2-node cluster on two Ubuntu 16.04 VMs. I actually had this working earlier, but because I had tweaked a number of different settings (both corosync-related and external), I reverted my VMs to an earlier checkpoint to make sure I wasn't just running off a lucky 'magic config' and that I could replicate the setup. It turns out I can't!
The config for nodes is as follows:
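(For reference, a minimal two-node corosync 2.x config along these lines is what I'm aiming for — the cluster name, network, and addresses below are illustrative placeholders, not necessarily the exact values in use:)

```
totem {
    version: 2
    cluster_name: swarm            # placeholder name
    transport: udpu                # assumed unicast transport
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0   # placeholder network address
    }
}

nodelist {
    node {
        ring0_addr: 192.168.1.11   # placeholder for SWARM01
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.12   # placeholder for SWARM02
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
}
```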
This time around, when I first start corosync with `systemctl start corosync`, it comes up bound to 127.0.0.1.
Apr 12 20:28:37 SWARM01 corosync: [TOTEM ] Initializing transmit/receive security (NSS)
Apr 12 20:28:37 SWARM01 corosync: [TOTEM ] The network interface [127.0.0.1] is now up
Apr 12 20:28:37 SWARM01 corosync: [QB ] server name: cmap
Apr 12 20:28:37 SWARM01 corosync: [QB ] server name: cfg
Apr 12 20:28:37 SWARM01 corosync: [QB ] server name: cpg
Apr 12 20:28:37 SWARM01 corosync: [QB ] server name: votequorum
Apr 12 20:28:37 SWARM01 corosync: [QB ] server name: quorum
Apr 12 20:28:37 SWARM01 corosync: [TOTEM ] A new membership (127.0.0.1:4) was formed.
Results from `sudo corosync-quorumtool`:

Date:             Wed Apr 12 20:31:12 2017
Quorum provider:  corosync_votequorum
Node ID:          2130706433
Ring ID:          4
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked

    Nodeid      Votes Name
2130706433          1 localhost (local)
And results from `sudo corosync-cmapctl | grep members`:
runtime.totem.pg.mrp.srp.members.2130706433.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2130706433.ip (str) = r(0) ip(127.0.0.1)
runtime.totem.pg.mrp.srp.members.2130706433.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2130706433.status (str) = joined
It's also not using the correct node ID (it should be 1 or 2, depending on which node I run it on). And if I then try to restart the service, it simply fails, without logging anything to /var/log/corosync/corosync.log.
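(That odd node ID is itself a clue: when no nodeid is configured, corosync derives one from the bound IPv4 address, and 2130706433 is just 127.0.0.1 packed into a 32-bit integer. A quick check:)

```python
import struct
import socket

# Pack the dotted-quad address into its 32-bit big-endian integer form --
# the same value corosync reports as the auto-generated nodeid.
nodeid = struct.unpack("!I", socket.inet_aton("127.0.0.1"))[0]
print(nodeid)  # 2130706433, exactly the Node ID in the quorumtool output
```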
Apr 12 20:52:01 SWARM01 systemd: Starting Corosync Cluster Engine...
Apr 12 20:52:01 SWARM01 systemd: corosync.service: Main process exited, code=exited, status=8/n/a
Apr 12 20:52:01 SWARM01 systemd: Failed to start Corosync Cluster Engine.
Apr 12 20:52:01 SWARM01 systemd: corosync.service: Unit entered failed state.
Apr 12 20:52:01 SWARM01 systemd: corosync.service: Failed with result 'exit-code'.
The only way I can test this again is to remove corosync completely (`apt remove --purge corosync`), reinstall, and try once more. I've tried disabling the firewall entirely to rule out interference, but it's as if corosync isn't respecting my config file this time around.
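(For what it's worth, my understanding — an assumption on my part — is that corosync picks the interface whose address falls inside the `bindnetaddr` network, and ends up on 127.0.0.1 when nothing matches, which would explain the loopback bind. A quick way to sanity-check a candidate pairing, with placeholder addresses:)

```python
import ipaddress

# Placeholder values, not necessarily the real config: the totem bindnetaddr
# network and one of the node's interface addresses.
bind_net = ipaddress.ip_network("192.168.1.0/24")
iface_addr = ipaddress.ip_address("192.168.1.11")

# If the interface address is inside the bindnetaddr network, corosync should
# pick that interface; otherwise it can end up bound to loopback.
print(iface_addr in bind_net)  # True
```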
Any guidance at all would be greatly appreciated!
Senior ICT Technician