[ClusterLabs] design of a two-node cluster

Lentes, Bernd bernd.lentes at helmholtz-muenchen.de
Mon Dec 7 20:27:17 UTC 2015


Digimer wrote:
> 
> On 07/12/15 12:35 PM, Lentes, Bernd wrote:
> > Hi,
> >
> > I asked around here a while ago. Unfortunately I couldn't
> > continue to work on my cluster, so I'm still thinking about the design.
> > I hope you will help me again with some recommendations, because once
> > the cluster is running, changing the design is no longer possible.
> >
> > These are my requirements:
> >
> > - all services are running inside virtual machines (KVM), mostly
> > databases and static/dynamic webpages
> 
> This is fine, it's what we do with our 2-node clusters.
> 
> > - I have two nodes and would like to have some vm's running on node A
> > and some on node B during normal operation, as a kind of load balancing
> 
> I used to do this, but I've since stopped. The reasons are:
> 
> 1. You need to know that one node can host all servers and still perform
> properly. By always running on one node, you know that this is the case.
> Further, if one node ever stops being powerful enough, you will find out
> early and can address the issue immediately.
> 
> 2. If there is a problem, you can always be sure which node to terminate
> (ie: the node hosting all servers gets the fence delay, so the node
> without servers will always get fenced). If you lose input power, you can
> quickly power down the backup node to shed load, etc.

Hi Digimer,
thanks for your reply.
I don't quite understand what you mean in (2). Is the idea to give the
node that hosts all the VMs a head start in a fence race?
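
If so, just to check my understanding, would it look roughly like this?
A sketch with hypothetical IPMI details, assuming a fence agent like
fence_ipmilan that supports a static "delay" parameter:

  # the device that fences node A (the VM host) waits 15s before
  # shooting, so in a fence race node B always dies first
  crm configure primitive fence-nodea stonith:fence_ipmilan \
      params pcmk_host_list=nodea ipaddr=10.0.0.1 login=admin \
      passwd=secret delay=15 op monitor interval=60s
  crm configure primitive fence-nodeb stonith:fence_ipmilan \
      params pcmk_host_list=nodeb ipaddr=10.0.0.2 login=admin \
      passwd=secret op monitor interval=60s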

> 
> > - I'd like to keep the setup simple (if possible)
> 
> There is a minimum complexity in HA, but you can get as close to that
> minimum as possible. We've spent years trying to simplify our VM hosting
> clusters as much as possible.
> 
> > - availability is important, performance not so much (webpages some
> > hundred requests per day, databases some hundred inserts/selects per
> > day)
> 
> All the more reason to consolidate all VMs on one host.
> 
> > - I'd like to have snapshots of the vm's
> 
> This is never a good idea, as you capture the state of the disk at the
> point of the snapshot, but not RAM. Anything in buffers will be missed,
> so you cannot rely on the snapshot images to always be consistent or
> even functional.
> 
> > - live migration of the vm's should be possible
> 
> Easy enough.
> 
> > - nodes are SLES 11 SP4, vm's are Windows 7 and several Linux
> > distributions (Ubuntu, SLES, OpenSuSE)
> 
> The OS installed on the guest VMs should not factor. As for the node OS,
> SUSE invests in making sure that HA works well so you should be fine.
> 
> > - setup should be extensible (add further vm's)
> 
> That is entirely a question of available hardware resources.
> 
> > - I have a shared storage (FC SAN)
> 
> Personally, I prefer DRBD (truly replicated storage), but SAN is fine.
> 
> > My ideas/questions:
> >
> > Should I install all vm's in one partition, or every vm in a separate
> > partition? The advantage of one vm per partition is that I don't need
> > a cluster fs, right?
> 
> I would put each VM on a dedicated LV and not have an FS between the
> VM and the host. The question then becomes: what is the PV? I use
> clustered LVM to make sure all nodes are in sync, LVM-wise.

Is this the setup you are running (without a fs)?
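
Just to check: something like this, with hypothetical device and volume
names?

  # mark the VG as clustered so clvmd keeps the LVM metadata in sync
  # on both nodes
  pvcreate /dev/mapper/san_lun0
  vgcreate -c y vg_vms /dev/mapper/san_lun0
  lvcreate -L 40G -n vm_web vg_vms

and the guest then uses the bare LV as its disk, e.g. in the domain XML
(cache='none' so that live migration stays safe):

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/vg_vms/vm_web'/>
    <target dev='vda' bus='virtio'/>
  </disk>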

> 
> > I read that one should avoid a cluster fs if possible because it adds
> > further complexity. Below the fs I'd like to have logical volumes
> > because they are easy to expand.
> 
> Avoiding a clustered FS is always preferable, yes. I use a small gfs2
> partition, but this is just for storing VM XML data, install media, etc.
> Things that change rarely. Some advocate for having independent FSes
> on each node and keeping the data in sync using things like rsync or
> what have you.
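
For such a small shared gfs2 partition, I guess the setup would be
roughly this (hypothetical cluster and volume names; gfs2 wants one
journal per node):

  lvcreate -L 10G -n shared vg_vms
  mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 2 /dev/vg_vms/shared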
> 
> > Do I need cLVM (I think so)? Is it an advantage to install the vm's
> > in plain partitions, without a fs?
> 
> I advise it, yes.
> 
> > It would reduce the complexity further because I don't need a fs.
> > Would live migration still be possible?
> 
> Live migration is possible provided both nodes can see the same physical
> storage at the same time. For example, DRBD dual-primary works. If you
> use clustered LVM, you can be sure that the backing LVs are the same
> across the nodes.

And this works without a cluster fs? But when both nodes access the LV
concurrently (during the migration), won't the data be corrupted?
cLVM does not control concurrent access; it just propagates the LVM
metadata to all nodes and locks during changes of the metadata.
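
The migration itself would then be a single call, I assume, e.g. with
hypothetical VM and host names:

  virsh migrate --live vm_web qemu+ssh://nodeb/system

but is qemu really careful enough that only one host writes to the LV at
any moment during this?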

> 
> > snapshots:
> > I was playing around with virsh (libvirt) to create snapshots of the
> > vm's. In the end I gave up. virsh explains commands in its help, but
> > when you want to use them you get messages like "not supported yet",
> > although I use libvirt 1.2.11. This is ridiculous. I think I will
> > create my snapshots inside the vm's using LVM.
> > We have a network-based backup solution (Legato/EMC) which saves the
> > disks every night. Supplying a snapshot for that gives me a consistent
> > backup. The databases are dumped with their respective tools.
> >
> > Thanks in advance.
> 
> I don't recommend snapshots, as I mentioned. What I recommend is to
> focus on your backup application and create DR VMs if you want to
> minimize the time to recovery after a total VM loss.
> 
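
Regarding the snapshots inside the vm's: what I have in mind is roughly
this, run in the guest before the nightly backup (hypothetical VG/LV
names, with MySQL as an example):

  # dump the database first, then freeze the data LV in a
  # copy-on-write snapshot for the backup client to read
  mysqldump --single-transaction --all-databases > /srv/dump/all.sql
  lvcreate -s -L 5G -n data_snap /dev/vg0/data
  mount -o ro /dev/vg0/data_snap /mnt/snap
  # ... Legato/EMC saves /mnt/snap overnight ...
  umount /mnt/snap
  lvremove -f /dev/vg0/data_snap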

What do you mean by DR?

Bernd 
   
