[ClusterLabs] [Linux-HA] Cluster for HA VM's serving our local network
J. Echter
j.echter at echter-kuechen-elektro.de
Wed Sep 23 16:33:01 UTC 2015
On 23.09.2015 at 16:48, Digimer wrote:
> On 23/09/15 10:23 AM, J. Echter wrote:
>> Hi Digimer,
>>
>> On 23.09.2015 at 15:38, Digimer wrote:
>>> Hi Juergen,
>>>
>>> First: this list is deprecated and you should use the Cluster Labs -
>>> Users list (which I've cc'ed here).
>> I already got that reminder when I sent my message, and I subscribed. :)
> I'm switching the thread to there then.
>
>>> Second: that tutorial is quite old and was replaced a while ago with
>>> this one: https://alteeve.ca/w/AN!Cluster_Tutorial_2. It has a lot of
>>> improvements we made after having many systems out in the field, so it
>>> is well worth re-doing your setup to match it. It's mostly the same, so
>>> it shouldn't be a big job.
>> I'll have a look at the new one.
> The main change, relative to this discussion, is more descriptive
> interface names.
OK, this can wait for now. :)
>
>>> I'll address your comments in-line:
>>>
>>> On 23/09/15 08:38 AM, J. Echter wrote:
>>>> Hi,
>>>>
>>>> I used this guide
>>>> https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial_-_Archive to
>>>> set up my cluster for some services; everything works pretty well.
>>>>
>>>> I decided to use this cluster as an HA VM provider for my network.
>>>>
>>>> I have a little, maybe silly, question.
>>>>
>>>> The guide tells me to disable the qemu default network, like this:
>>>>
>>>>> Disable the 'qemu' Bridge
>>>>>
>>>>> By default, libvirtd <https://alteeve.ca/w/Libvirtd> creates a bridge
>>>>> called virbr0 designed to connect virtual machines to the first eth0
>>>>> interface. Our system will not need this, so we will remove it now.
>>>>>
>>>>> If libvirtd has started, skip to the next step. If you haven't started
>>>>> libvirtd yet, you can manually disable the bridge by blanking out the
>>>>> config file.
>>>>>
>>>>> cat /dev/null > /etc/libvirt/qemu/networks/default.xml
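
(Side note: if libvirtd was already running at this point, the same
default network can be removed cleanly with virsh instead of blanking
the file; a minimal sketch, assuming the stock network is still named
"default":

  virsh net-destroy default                # stop the running NAT network
  virsh net-autostart default --disable    # don't bring it back on boot
  virsh net-undefine default               # remove its definition for good

Afterwards, "virsh net-list --all" should no longer show it.)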
>>>> I skipped the step to create the bridge device, as it was not needed
>>>> for my purposes.
>>> OK.
>>>
>>>>> vim /etc/sysconfig/network-scripts/ifcfg-vbr2
>>>>> # Internet-Facing Network - Bridge
>>>>> DEVICE="vbr2"
>>>>> TYPE="Bridge"
>>>>> BOOTPROTO="static"
>>>>> IPADDR="10.255.0.1"
>>>>> NETMASK="255.255.0.0"
>>>>> GATEWAY="10.255.255.254"
>>>>> DNS1="8.8.8.8"
>>>>> DNS2="8.8.4.4"
>>>>> DEFROUTE="yes"
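
(For that bridge to actually carry traffic, the internet-facing bond has
to be enslaved to it as a port; a sketch of the matching ifcfg, assuming
bond2 is the IFN bond as in the tutorial — in Juergen's layout the
LAN-facing bond0 would take this role instead — with illustrative
bonding options:

  vim /etc/sysconfig/network-scripts/ifcfg-bond2
  # Internet-Facing Network - Bond (port of vbr2)
  DEVICE="bond2"
  BRIDGE="vbr2"
  BOOTPROTO="none"
  ONBOOT="yes"
  BONDING_OPTS="mode=1 miimon=100"

The node's IP lives on vbr2, so the bond itself gets no address.)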
>>>> Now I want to know how to proceed.
>>>>
>>>> I have bond0 connected to my network (both nodes got different IPs
>>>> from my DHCP server);
>>>> bond1 & bond2 are used for corosync and DRBD.
>>>>
>>>> What would be the best way to have some VMs served from this
>>>> 2-node cluster too?
>>> From a bridging perspective, the quoted example config above is good.
>>> The default libvirtd bridge is a NAT'ed bridge, so your VMs would get
>>> IPs in the 192.168.122.0/24 subnet, and the libvirtd bridge would route
>>> them to the outside world. Using the bridge type in the tutorial though,
>>> your VMs would appear to be directly on your network and would get (or
>>> you would assign) IPs just the same as the rest of your system.
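
(To actually put a guest on that bridge, its libvirt domain definition
points at vbr2 directly; a minimal sketch of the relevant interface
stanza:

  <interface type='bridge'>
    <source bridge='vbr2'/>
    <model type='virtio'/>
  </interface>

The guest then configures its own IP, static or via the LAN's DHCP,
exactly as a physical machine would.)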
>> So I can just use this example in my setup?
>>
>> bond0 = LAN = 192.168.0.0/24
> This is the BCN, and is usually on 10.20.0.0/16
Maybe I mixed up your tutorial a bit on my system :)
That's how I did it:
192.168.0.200 cluster (virtual ip)
192.168.0.205 mule (node1) (bond0)
192.168.0.211 bacula (node2) (bond0)
# mule
10.20.0.1 mule.bcn (bond1)
10.20.1.1 mule.ipmi
10.10.0.1 mule.sn (bond2)
# bacula
10.20.0.2 bacula.bcn (bond1)
10.20.1.2 bacula.ipmi
10.10.0.2 bacula.sn (bond2)
>> bridge = 10.255.0.1
> The bridge is on the IFN, which in the tutorial is on 10.255.0.0/16, so
> yes. Note that the IP assigned to the bridge has no bearing at all on
> the IPs set in the VMs.
>
>> Can I use my own DNS server, running on the LAN?
>>
>> Like this:
>>
>> DEVICE="vbr2"
>> TYPE="Bridge"
>> BOOTPROTO="static"
>> IPADDR="10.255.0.1"
>> NETMASK="255.255.0.0"
>> GATEWAY="10.255.255.254"
>> DNS1="192.168.0.1"
>> DEFROUTE="yes"
> Sure. With this style of bridging, it's like your VMs are plugged
> directly into the physical switch. What you do on the node has no
> bearing. The only thing is that you move the IP assignment for the node
> out of the bond and into the bridge. In fact, you can assign no IP to
> the bridge and traffic from the VMs will route fine.
>
> So think of this bridge as being like a regular hardware switch that the
> VMs plug into and that the node itself plugs into, and the bond as the
> "cable" linking the vritual switch to the hardware switch. When you
> think of it like that, you can see how the setup of the node has no
> bearing on anything else.
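
(For reference, the "no IP on the bridge" variant Digimer mentions would
just drop the address lines from the ifcfg; a sketch:

  DEVICE="vbr2"
  TYPE="Bridge"
  BOOTPROTO="none"
  ONBOOT="yes"

The VMs still reach the LAN through it; only the node itself would then
need its management IP somewhere else, e.g. on a separate interface.)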
OK, then I could use any free IP I like, as long as I don't use it
elsewhere in my LAN (OpenVPN, etc.),
so I could use the above example as it is and it should work :)
I still have to get some more network stuff into my head ;)
>
>>>> Thanks, and please tell me what information I may have forgotten to
>>>> provide. :)
>>>>
>>>> cheers
>>>>
>>>> juergen
>> Thanks for your support.
>>
>> cheers
Again, lots of thanks for your help.
Cheers.