[ClusterLabs] [Linux-HA] Cluster for HA VM's serving our local network

Digimer lists at alteeve.ca
Wed Sep 23 17:26:20 UTC 2015


On 23/09/15 12:33 PM, J. Echter wrote:
> Am 23.09.2015 um 16:48 schrieb Digimer:
>> On 23/09/15 10:23 AM, J. Echter wrote:
>>> Hi Digimer,
>>>
>>> Am 23.09.2015 um 15:38 schrieb Digimer:
>>>> Hi Juergen,
>>>>
>>>>    First; This list is deprecated and you should use the Cluster Labs -
>>>> Users list (which I've cc'ed here).
>>> i already got that reminder when i sent my message, and i subscribed :)
>> I'm switching the thread to there then.
>>
>>>>    Second; That tutorial is quite old and was replaced a while ago with
>>>> this one: https://alteeve.ca/w/AN!Cluster_Tutorial_2. It has a lot of
>>>> improvements we made after having many systems out in the field, so it
>>>> is well worth re-doing your setup to match it. It's mostly the same, so
>>>> it shouldn't be a big job.
>>> i'll have a look over the new one.
>> The main change, relative to this discussion, is more descriptive
>> interface names.
> 
> ok, this can wait, for now. :)

Yup. There are a few other things that catch corner-case issues better
as well. However, if you want to wait a bit, there will soon (a few
months) be an installable ISO that almost fully automates the install
process and adds a whole pile of new features. :)

>>>>    I'll address your comments in-line:
>>>>
>>>> On 23/09/15 08:38 AM, J. Echter wrote:
>>>>> Hi,
>>>>>
>>>>> i was using this guide
>>>>> https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial_-_Archive to
>>>>> set up my cluster for some services; everything works pretty well.
>>>>>
>>>>> I decided to use this cluster as an HA VM provider for my network.
>>>>>
>>>>> I have a little, maybe silly, question.
>>>>>
>>>>> The guide tells me to disable the qemu default network, like this:
>>>>>
>>>>>> Disable the 'qemu' Bridge
>>>>>>
>>>>>> By default, libvirtd <https://alteeve.ca/w/Libvirtd> creates a bridge
>>>>>> called virbr0 designed to connect virtual machines to the first eth0
>>>>>> interface. Our system will not need this, so we will remove it now.
>>>>>>
>>>>>> If libvirtd has started, skip to the next step. If you haven't started
>>>>>> libvirtd yet, you can manually disable the bridge by blanking out the
>>>>>> config file.
>>>>>>
>>>>>> cat /dev/null > /etc/libvirt/qemu/networks/default.xml
>>>>> i skipped the step to create the bridge device, as it was not
>>>>> needed for my purposes.
>>>> OK.
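
As an aside: if libvirtd is already running when you hit that step,
you can also remove the default network with virsh instead of blanking
the file. Something like this (standard virsh commands, though I
haven't re-tested them against this exact setup):

virsh net-destroy default              # stop the running 'default' network
virsh net-autostart default --disable  # don't bring it back on boot
virsh net-undefine default             # delete its definition entirely

Either way, the goal is just to keep virbr0 from being created.
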
>>>>
>>>>>> vim /etc/sysconfig/network-scripts/ifcfg-vbr2
>>>>>> # Internet-Facing Network - Bridge
>>>>>> DEVICE="vbr2"
>>>>>> TYPE="Bridge"
>>>>>> BOOTPROTO="static"
>>>>>> IPADDR="10.255.0.1"
>>>>>> NETMASK="255.255.0.0"
>>>>>> GATEWAY="10.255.255.254"
>>>>>> DNS1="8.8.8.8"
>>>>>> DNS2="8.8.4.4"
>>>>>> DEFROUTE="yes"
>>>>> Now i want to know how to proceed.
>>>>>
>>>>> i have bond0 connected to my network (both nodes got different
>>>>> IPs from my DHCP); bond1 & bond2 are used for corosync and drbd.
>>>>>
>>>>> what would be the best way to have some VMs served from this
>>>>> 2-node cluster too?
>>>> From a bridging perspective, the quoted example config above is good.
>>>> The default libvirtd bridge is a NAT'ed bridge, so your VMs would get
>>>> IPs in the 192.168.122.0/24 subnet, and the libvirtd bridge would route
>>>> them to the outside world. Using the bridge type in the tutorial though,
>>>> your VMs would appear to be directly on your network and would get (or
>>>> you would assign) IPs just the same as the rest of your system.
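
To expand on that a bit: when you define a VM against the bridge, the
interface section of its libvirt XML simply names vbr2 as the source.
A minimal sketch (the virtio model is just a suggestion):

<interface type='bridge'>
  <source bridge='vbr2'/>
  <model type='virtio'/>
</interface>

The VM's NIC then behaves as if it were cabled straight into your
switch.
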
>>> so i can just use this example on my setup?
>>>
>>> bond0 = LAN = 192.168.0.0/24
>> This is the BCN, and is usually on 10.20.0.0/16
> 
> maybe i messed up your tutorial a bit on my system :)
> 
> thats how i did it:
> 
> 192.168.0.200   cluster (virtual ip)
> 192.168.0.205   mule (node1) (bond0)
> 192.168.0.211   bacula (node2) (bond0)

So 'mule' and 'bacula' are the node names?

Generally speaking, the BCN and SN subnets stay the same as in the
tutorial, so long as they don't conflict with your existing network
configuration. They should be isolated from your existing network,
either via VLANs or by using physically separate switches.

The IFN is your existing network, which appears to be 192.168.0.0/24 (or
/16). So anywhere in the tutorial where you saw '10.255', you would
replace with your IPs.
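
For example, taking the tutorial's ifcfg-vbr2 and moving it onto your
192.168.0.0/24 network would look roughly like this (the gateway and
DNS addresses are guesses based on what you've posted, and the IP is
just a placeholder; substitute a free address):

vim /etc/sysconfig/network-scripts/ifcfg-vbr2
# Internet-Facing Network - Bridge
DEVICE="vbr2"
TYPE="Bridge"
BOOTPROTO="static"
IPADDR="192.168.0.210"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS1="192.168.0.1"
DEFROUTE="yes"
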

The BCN (10.20/16) was on eth0 + eth3 -> bond0  (no bridge)
The SN  (10.10/16) was on eth1 + eth4 -> bond1  (no bridge)
The IFN (192.168.0/24) was on eth2 + eth5 -> bond2 -> vbr2.

So your bridge should be vbr2 on bond2 with your public/intranet IP
assigned. You will want to make sure that the hostname you enter into
/etc/cluster/cluster.conf resolves to the 10.20/16 BCN network, as that
is how corosync decides which interface to use for cluster traffic.
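
Concretely, with your /etc/hosts entries below, cluster.conf just
needs the node names to match the .bcn entries. A bare-bones sketch
(the cluster name is made up, and fencing is omitted for brevity; a
real config needs fence devices):

<cluster name="example-cluster" config_version="1">
  <clusternodes>
    <clusternode name="mule.bcn" nodeid="1" />
    <clusternode name="bacula.bcn" nodeid="2" />
  </clusternodes>
</cluster>
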

> # mule
> 10.20.0.1       mule.bcn (bond1)
> 10.20.1.1       mule.ipmi
> 10.10.0.1       mule.sn (bond2)
> 
> # bacula
> 10.20.0.2       bacula.bcn (bond1)
> 10.20.1.2       bacula.ipmi
> 10.10.0.2       bacula.sn (bond2)
> 
>>> bridge = 10.255.0.1
>> The bridge is on the IFN, which in the tutorial is on 10.255.0.0/16, so
>> yes. Note that the IP assigned to the bridge has no bearing at all on
>> the IPs set in the VMs.
>>
>>> can i use my own dns server, running on the lan?
>>>
>>> like this:
>>>
>>> DEVICE="vbr2"
>>> TYPE="Bridge"
>>> BOOTPROTO="static"
>>> IPADDR="10.255.0.1"
>>> NETMASK="255.255.0.0"
>>> GATEWAY="10.255.255.254"
>>> DNS1="192.168.0.1"
>>> DEFROUTE="yes"
>> Sure. With this style of bridging, it's like your VMs are plugged
>> directly into the physical switch. What you do on the node has no
>> bearing. The only thing is that you move the IP assignment for the
>> node out of the bond and into the bridge. In fact, you can assign no
>> IP to the bridge at all and traffic from the VMs will still route fine.
>>
>> So think of this bridge as being like a regular hardware switch that
>> the VMs plug into and that the node itself plugs into, and the bond
>> as the "cable" linking the virtual switch to the hardware switch.
>> When you think of it like that, you can see how the setup of the node
>> has no bearing on anything else.
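
To make that concrete: the bond ends up carrying no IP at all, it just
names the bridge it feeds. Roughly like this (device names follow the
tutorial, and the bonding options are the tutorial's defaults; tune
them for your hardware):

vim /etc/sysconfig/network-scripts/ifcfg-bond2
# Internet-Facing Network - Bond (slaved to vbr2, no IP here)
DEVICE="bond2"
BRIDGE="vbr2"
BOOTPROTO="none"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0"

All of the IP configuration lives on vbr2 (or nowhere at all, if you
prefer).
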
> 
> ok, then i could use any free ip i like, as long as i don't use it
> elsewhere in my lan (openvpn, etc)

Yup.

> so i could use the above example as it is and it should work :)

Up to you and your network. Please note my comment on the hostname /
cluster.conf.
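
Once everything is up, a quick sanity check never hurts. These are
standard commands:

brctl show vbr2        # bond2 should show up as the attached interface
getent hosts mule.bcn  # should return 10.20.0.1, the BCN address

If both look right, corosync will pick the BCN interface as intended.
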

> still have to get some more network stuff into my head ;)

This is why the tutorial starts with a warning to be patient. There is
nothing hard about HA, but there are a lot of pieces and it takes a
while to get it all sorted out in your head.

>>>>> thanks, and please tell me what info i may have forgotten to
>>>>> provide for you. :)
>>>>>
>>>>> cheers
>>>>>
>>>>> juergen
>>> thanks for your support.
>>>
>>> cheers
> 
> again, lots of thanks for your help.
> 
> cheers.

Happy to. Keep asking questions and I will answer as best I can.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



