[ClusterLabs] how to "switch on" cLVM ?
Sven Moeller
smoeller at nichthelfer.de
Tue Jun 7 10:28:16 UTC 2016
Hi,
On Tuesday, 7 June 2016 at 11:42 CEST, "Lentes, Bernd" <bernd.lentes at helmholtz-muenchen.de> wrote:
>
>
> ----- On Jun 6, 2016, at 8:17 PM, Digimer lists at alteeve.ca wrote:
>
> > On 06/06/16 01:13 PM, Lentes, Bernd wrote:
> >> Hi,
> >>
> >> I'm currently setting up a two-node cluster. I have an FC SAN and two hosts. My
> >> services run inside virtual machines (KVM). The VMs should reside on the SAN.
> >> The hosts are connected to the SAN via FC HBAs. Inside the hosts I can already
> >> see the volume from the SAN. I'd like to store each VM in a dedicated logical
> >> volume (with or without a filesystem in the LV).
> >> The hosts run SLES 11 SP4. LVM2 and LVM2-clvm are installed.
> >> How do I "switch on" cLVM? Is locking_type=3 in /etc/lvm/lvm.conf all that is
> >> necessary?
> >
> > That tells LVM to use cluster locking, but you still need something to
> > actually provide that locking: DLM, the distributed lock manager. With the
> > cluster formed (and fencing working!), you should be able to start the
> > clvmd daemon.
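For reference, roughly what that looks like on both nodes (a minimal sketch; whether the lvmconf helper script is shipped on SLES 11 SP4 is an assumption, check your packages):

    # /etc/lvm/lvm.conf, "global" section, on both nodes:
    global {
        locking_type = 3    # built-in clustered locking via clvmd/DLM
    }

    # or, if your distribution ships the lvmconf helper:
    lvmconf --enable-cluster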
> >
> >> On both nodes?
> >
> > Yes
> >
> >> Is restarting the init scripts afterwards sufficient?
> >
> > No, DLM needs to be added to the cluster and be running.
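As an illustration (the resource and clone names are just examples), DLM is usually added as a cloned resource via the crm shell, something like:

    crm configure primitive dlm ocf:pacemaker:controld \
        op monitor interval=60s timeout=60s
    crm configure clone dlm-clone dlm meta interleave=true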
>
> Ok. Does DLM take care that an LV can only be used on one host?
> cLVM just takes care that the naming is the same on all nodes, right?
AFAIK DLM takes care of the LVM locking cluster-wide.
>
> >
> >> And how do I have to proceed afterwards?
> >> My idea is:
> >> 1. Create a PV
> >> 2. Create a VG
> >> 3. Create several LVs
> >
> > If the VG is created while dlm is running, it should automatically flag
> > the VG as clustered. If not, you will need to tell LVM that the VG is
> > clustered (-cy, iirc).
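A rough example of those three steps with the clustered flag set explicitly (the device path, VG name and sizes are only placeholders):

    pvcreate /dev/mapper/san_lun                        # 1. the SAN volume
    vgcreate --clustered y vg_vms /dev/mapper/san_lun   # 2. clustered VG
    lvcreate -L 20G -n vm1 vg_vms                       # 3. one LV per VM
    lvcreate -L 20G -n vm2 vg_vms

    # an already existing VG can be flagged as clustered afterwards:
    vgchange --clustered y vg_vms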
> >
> >> And because of cLVM I only have to do that on one host, and the other host
> >> sees everything automatically?
> >
> > Once clvmd is running, any changes made (lvcreate, delete, resize, etc)
> > will immediately appear on the other nodes.
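For instance (reusing the example VG name from above), after creating an LV on node 1 it should be visible on node 2 right away:

    node1# lvcreate -L 10G -n vm3 vg_vms
    node2# lvs vg_vms    # vm3 is listed without any manual rescan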
> >
>
> Ok.
>
> >> Later on it's possible that some VMs run on host 1 and some on host 2. Does
> >> clvm need to be a resource managed by the cluster manager?
> >
> > Yes, you can live-migrate as well. I do this all the time, except I use
> > DRBD instead of a SAN and RHEL instead of SUSE, but those are trivial
> > differences in this case.
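The clvmd daemon itself usually runs as a cloned resource ordered after DLM, along these lines (the agent name ocf:lvm2:clvmd is assumed for SLES 11, later releases ship it as ocf:heartbeat:clvm; resource names are examples):

    crm configure primitive clvm ocf:lvm2:clvmd \
        op monitor interval=60s timeout=90s
    crm configure clone clvm-clone clvm meta interleave=true
    crm configure order o-dlm-before-clvm inf: dlm-clone clvm-clone
    crm configure colocation col-clvm-with-dlm inf: clvm-clone dlm-clone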
> >
> >> If I use a FS inside the LV, a "normal" FS like ext3 is sufficient, I think. But
> >> it has to be a cluster resource, right?
> >
> > You can format a clustered LV with a cluster-unaware filesystem just
> > fine. However, the FS is not made magically cluster-aware... If you
> > mount it on two nodes, you will almost certainly corrupt the FS quickly.
> > If you want to mount an LV on two or more nodes at once, you need a
> > cluster-aware file system, like GFS2.
>
> No. Pacemaker takes care that the FS is mounted on only one node.
> So it should not be a problem?
If you want to be sure an LV is mounted on just one node, you have to activate the VG exclusively on one node, and configure the cluster resource for the VG accordingly. Otherwise it is possible to activate and mount an LV on several nodes at the same time, even with a non-cluster FS such as ext4, which would most likely end up in a corrupted FS (as already mentioned above).
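A hedged sketch of that setup with the heartbeat LVM and Filesystem agents (the VG, LV and mount point names are made up):

    # activate the VG exclusively on one node at a time:
    crm configure primitive p-vg-vms ocf:heartbeat:LVM \
        params volgrpname=vg_vms exclusive=true \
        op monitor interval=60s timeout=60s

    # mount the ext3 FS of one VM on whichever node holds the VG:
    crm configure primitive p-fs-vm1 ocf:heartbeat:Filesystem \
        params device=/dev/vg_vms/vm1 directory=/var/lib/vm1 fstype=ext3 \
        op monitor interval=20s timeout=40s

    crm configure group g-vm1 p-vg-vms p-fs-vm1

Keep in mind that with exclusive activation the whole VG follows a single node, so VMs whose LVs live in the same VG cannot be spread across both hosts at the same time; for that you would need separate VGs per host, or a cluster FS after all.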
Sven
>
> Bernd
>
>
> Helmholtz Zentrum Muenchen
> Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
> Ingolstaedter Landstr. 1
> 85764 Neuherberg
> www.helmholtz-muenchen.de
> Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
> Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Dr. Alfons Enhsen, Renate Schlusen (komm.)
> Registergericht: Amtsgericht Muenchen HRB 6466
> USt-IdNr: DE 129521671
>
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org