[ClusterLabs] principal questions to a two-node cluster

Michael Schwartzkopff ms at sys4.de
Mon Apr 20 17:21:20 UTC 2015


On Monday, 20 April 2015, 19:12:01, Lentes, Bernd wrote:
> Michael wrote:
> > On Monday, 20 April 2015, 15:23:28, Lentes, Bernd wrote:
> > > Hi,
> > > 
> > > we'd like to create a two-node cluster for our services (web,
> > > database, virtual machines). We will have two servers and a shared
> > > fiberchannel SAN.
> > 
> > > What would you do e.g. with the content of the webpages we offer? Put
> > > them on the SAN so we don't need to synchronize them between the
> > > two nodes?
> > 
> > Yes. That seems to be a good idea.
> > 
> > > Also the database and the vm's on the SAN? Which fs would you
> > > recommend for the SAN volumes? OCFS2? Can I mount the same volume on
> > > each node contemporarily? Or do I have to use the ocfs2 as a resource
> > > managed by pacemaker, so that the volume is only mounted if it is
> > > necessary?
> > 
> > In your setup I'd avoid concurrent mounts of the volumes on both
> > servers. If you have concurrent mounts, you will have to use a cluster
> > file system (OCFS2, GFS, ...). These file systems provide locking. But if
> > pacemaker takes care that the volumes are only mounted on one
> > machine, you can go with a plain file system (ext4, xfs).
> 
> I thought ocfs2 would give me a further level of security. If, somehow,
> two hosts try to mount concurrently even though pacemaker takes care, with
> ocfs2 nothing bad would happen. Right? Is there any reason not to use
> ocfs2? E.g. performance, stability?

Yes. Some people wear suspenders and a belt ;-)

But seriously, OCFS2 needs the DLM. I'd try to avoid complex setups if possible. 
Make it as simple as possible. Complex setups tend to create the weirdest 
errors.
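For illustration, a minimal crm shell sketch of how Pacemaker could manage such a single-mount plain filesystem. All names, the device path, and the mount point are invented for this example:

```shell
# Sketch only -- device, directory and resource names are placeholders.
# A plain ext4 filesystem on the SAN LUN, managed by Pacemaker so that
# it is mounted on at most one node at a time.
primitive fs_web ocf:heartbeat:Filesystem \
    params device="/dev/disk/by-id/scsi-EXAMPLE_LUN" \
           directory="/srv/www" fstype="ext4" \
    op monitor interval="20s" timeout="40s"

# Group the filesystem with the web server so both always run together
# on the same node, filesystem started first.
primitive web ocf:heartbeat:apache \
    params configfile="/etc/apache2/apache2.conf" \
    op monitor interval="30s"
group grp_web fs_web web
```

Because fs_web is a plain primitive and not a clone, Pacemaker will never mount it on two nodes at once, which is exactly why no cluster file system is needed here.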

> > If you need LVM, you will need clustered LVM2 with the Distributed
> > Lock Manager (DLM) anyway.
> 
> Yes. I will not use LVM. But if I choose ocfs2, I also need DLM. Right?
> Or is there an advantage of choosing LVM? Snapshots? OCFS2 also seems to
> be able to take snapshots.

No, clustered LVM2 cannot take snapshots. If you use LVM in a cluster you need 
clustered LVM2, because it takes care of a consistent view of the LVM metadata 
on all servers. Otherwise you could run your LVs only on the right or the left 
server, but never a part here and a part there.
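Should you decide to use clustered LVM2 after all, the DLM and clvmd typically run as a cloned group on every node. A hedged crm shell sketch; the resource agent names are examples and differ between distributions:

```shell
# Illustrative only -- agent names vary by distribution
# (e.g. ocf:pacemaker:controld for the DLM, ocf:lvm2:clvmd on SUSE).
primitive dlm ocf:pacemaker:controld \
    op monitor interval="60s" timeout="60s"
primitive clvmd ocf:lvm2:clvmd \
    op monitor interval="60s" timeout="60s"
group grp_lvm_lock dlm clvmd
# The locking infrastructure must run on all cluster nodes.
clone cl_lvm_lock grp_lvm_lock meta interleave="true"
```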

> > Please also consider NFSv4 if your SAN box offers it. NFS has file locking
> > included.
> 
> The SAN does not offer NFS.

Go, buy another SAN.

> > Please do not hesitate to mail to me or to the list, if there are any
> > other
> > problems.
> > 
> > For the databases, you could also consider using a Master/Slave setup.
> > Then the data replication happens at the application level and no shared
> > filesystems are needed. Pacemaker handles the state (Master/Slave) of
> > the database application. Otherwise the database would need shared
> > storage.
> > 
> > Please note that you need fencing in ANY case if you have shared
> > storage.
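The Master/Slave database setup mentioned above could be sketched with the crm shell like this, using MySQL replication as an example. Credentials and paths are placeholders, and the replication parameters of the ocf:heartbeat:mysql agent should be checked against your version:

```shell
# Sketch only -- credentials and paths are placeholders.
# The mysql resource agent can drive native MySQL replication;
# Pacemaker then promotes exactly one instance to Master.
primitive mysql ocf:heartbeat:mysql \
    params binary="/usr/sbin/mysqld" \
           replication_user="repl" replication_passwd="secret" \
    op monitor interval="20s" role="Slave" \
    op monitor interval="10s" role="Master"
ms ms_mysql mysql \
    meta master-max="1" clone-max="2" notify="true"
```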
> 
> Yes. I have HP ProLiant servers with ILO cards, and also a configurable
> (via LAN) power distributor from APC.

An APC PDU has an advantage over ILO: ILO will not answer if the server's 
power is gone, so the cluster cannot confirm the fencing and will not 
continue. Such missing feedback is much less likely with an APC.
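As an illustration, both devices could be combined so the APC PDU acts as a fallback when the ILO does not respond. Agent names, addresses, and credentials are placeholders; check the fence agents shipped with your distribution:

```shell
# Placeholders throughout -- adapt agent names, addresses, credentials.
primitive st_ilo_node1 stonith:fence_ilo \
    params ipaddr="ilo-node1.example.com" login="fence" passwd="secret" \
           pcmk_host_list="node1"
primitive st_apc stonith:fence_apc \
    params ipaddr="pdu.example.com" login="fence" passwd="secret" \
           pcmk_host_list="node1 node2"
# A node must never run its own fencing device.
location l_st_node1 st_ilo_node1 -inf: node1
# Try ILO first; fall back to the APC PDU if ILO gives no answer.
fencing_topology node1: st_ilo_node1 st_apc
```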



> Are you the author of "Clusterbau: Hochverfügbarkeit mit Linux"? Great
> book.

Yes. Thank you. A positive review on Amazon would also make me happy ;-)

Kind regards,

Michael Schwartzkopff

-- 
sys4 AG

http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
Franziskanerstraße 15, 81669 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein