[ClusterLabs] how to "switch on" cLVM ?
bernd.lentes at helmholtz-muenchen.de
Tue Jun 7 14:06:45 EDT 2016
----- On Jun 7, 2016, at 5:54 PM, Digimer lists at alteeve.ca wrote:
> DLM is just a distributed lock manager, that's it. LVM uses it to
> coordinate actions within the cluster, so what LVM does is still up to LVM.
> I'm not a dev, so I might get this a little wrong, but basically it
> works like this...
> You want to create an LV from a clustered VG. The dlm and clvmd daemons
> are running on all nodes. You type 'lvcreate ...' on node 1. Node 1 asks
> for a lock from DLM. DLM checks that the lock you're asking for
> doesn't conflict with any lock held elsewhere in the cluster, and then
> it is granted.
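A hedged sketch of that flow from the shell (the VG name "clustervg" is made up; the commands are stock LVM2):

```shell
# On node 1: create an LV in the clustered VG "clustervg".
# With clvmd running, the command transparently requests the needed
# lock from DLM first; if another node holds a conflicting lock,
# the operation waits or fails instead of corrupting metadata.
lvcreate -L 20G -n vm1 clustervg
```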
What is locked? The metadata (the information about PVs, VGs and LVs)?
Or access to the LV itself?
> Now the local machine creates the LV (safe in the knowledge that no one
> else will work on the same bits, thanks to the DLM lock), releases the lock and
> informs the other nodes. The other nodes update their view of the VG and
> see the new LV.
> From the user's perspective, the LV they created on node 1 is instantly
> seen on the other nodes.
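A quick way to observe that behaviour (a sketch; "clustervg" and the node prompts are invented):

```shell
# node1: create a new LV in the clustered VG
node1$ lvcreate -L 20G -n vm2 clustervg

# node2: no rescan or refresh needed -- clvmd has already pushed
# the updated metadata view, so the new LV shows up immediately
node2$ lvs clustervg
```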
>> But maybe I want to have some VMs running on host A and others on host B.
>> Remember: one VM per LV.
>> So I need concurrent access to the VG from both nodes, right?
>> But if the FS on the LV is a cluster resource, Pacemaker takes care that the
>> FS is mounted on just one node at a time.
>> I can rely on that, right? That's what I read often.
>> But what if I don't have an FS? It's possible to keep VMs on plain block
>> devices, which should be a performance advantage.
> Clustered LVM doesn't care how an LV is used. It only cares that changes
> won't conflict (thanks to DLM) and that all nodes have the same view of
> the LVM. So deactivate, activate, grow, create, delete of PVs, VGs and
> LVs are always seen on all nodes right away.
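To connect this back to the subject line, this is roughly how a VG is "switched on" for cLVM in the first place (clvmd-era LVM2; the VG name and device are examples):

```shell
# Enable cluster-wide locking (sets locking_type = 3 in /etc/lvm/lvm.conf)
lvmconf --enable-cluster

# Either mark an existing VG as clustered...
vgchange -c y clustervg

# ...or create it as clustered from the start
vgcreate -c y clustervg /dev/sdb1
```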
> What you do on the LVs is up to you. If you boot a VM on node 1 using an LV
> as backing storage, nothing in LVM stops you from accessing that LV
> on another node and destroying your data.
That's logical. But the VM will be a cluster resource, so the cluster manager
makes sure that only one instance of the VM is running at any time.
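A minimal sketch of such a resource in Pacemaker's crm shell (the resource name and libvirt config path are invented; VirtualDomain is the usual resource agent for libvirt guests):

```shell
# Define the VM as a primitive resource; as long as it is not cloned,
# Pacemaker guarantees at most one active instance cluster-wide.
crm configure primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm1.xml" \
    op monitor interval=30s
```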
Helmholtz Zentrum Muenchen