[ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot
su liu
liusu8788 at gmail.com
Tue Dec 6 07:24:50 CET 2016
This is the resource configuration within my pacemaker cluster:
[root@controller ~]# cibadmin --query --scope resources
<resources>
<clone id="dlm-clone">
<primitive class="ocf" id="dlm" provider="pacemaker" type="controld">
<instance_attributes id="dlm-instance_attributes">
<nvpair id="dlm-instance_attributes-allow_stonith_disabled"
name="allow_stonith_disabled" value="true"/>
</instance_attributes>
<operations>
<op id="dlm-start-interval-0s" interval="0s" name="start"
timeout="90"/>
<op id="dlm-stop-interval-0s" interval="0s" name="stop"
timeout="100"/>
<op id="dlm-monitor-interval-30s" interval="30s" name="monitor"/>
</operations>
</primitive>
<meta_attributes id="dlm-clone-meta_attributes">
<nvpair id="dlm-interleave" name="interleave" value="true"/>
<nvpair id="dlm-ordered" name="ordered" value="true"/>
</meta_attributes>
</clone>
<clone id="clvmd-clone">
<primitive class="ocf" id="clvmd" provider="heartbeat" type="clvm">
<instance_attributes id="clvmd-instance_attributes">
<nvpair id="clvmd-instance_attributes-activate_vgs"
name="activate_vgs" value="true"/>
</instance_attributes>
<operations>
<op id="clvmd-start-interval-0s" interval="0s" name="start"
timeout="90"/>
<op id="clvmd-stop-interval-0s" interval="0s" name="stop"
timeout="90"/>
<op id="clvmd-monitor-interval-30s" interval="30s" name="monitor"/>
</operations>
<meta_attributes id="clvmd-meta_attributes"/>
</primitive>
<meta_attributes id="clvmd-clone-meta_attributes">
<nvpair id="clvmd-interleave" name="interleave" value="true"/>
<nvpair id="clvmd-ordered" name="ordered" value="true"/>
</meta_attributes>
</clone>
</resources>
[root@controller ~]#
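For reference, a resource stack like the one above is typically built with pcs roughly as follows. This is only a sketch: the ordering and colocation constraints at the end are the usual recommendation for dlm + clvmd and do not appear in the dump above.

# clone dlm and clvmd across all nodes (names match the dump above)
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s clone interleave=true ordered=true

# clvmd needs dlm started first, and on the same node
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone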
2016-12-06 14:16 GMT+08:00 su liu <liusu8788 at gmail.com>:
> Thank you very much.
>
> I am new to pacemaker. I have checked the docs and saw that additional
> devices are needed when configuring stonith, but I do not have such a device
> in my environment yet.
>
> I will look into configuring it later.
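As a rough sketch of what that configuration could look like on a two-node cluster such as this one (fence_ipmilan is only an example agent; the addresses and credentials below are placeholders, not values from this cluster):

# one fence device per node; pick the agent that matches your hardware
pcs stonith create fence-controller fence_ipmilan \
    pcmk_host_list="controller" ipaddr="192.168.0.101" \
    login="admin" passwd="secret" op monitor interval=60s
pcs stonith create fence-compute1 fence_ipmilan \
    pcmk_host_list="compute1" ipaddr="192.168.0.102" \
    login="admin" passwd="secret" op monitor interval=60s
pcs property set stonith-enabled=true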
>
> For now I want to understand how clustered LVM works. Thank you for your
> patient explanation.
>
> The scenario is:
>
> controller node + compute1 node
>
> I attach a SAN LUN to both the controller and compute1 nodes. Then I run a
> pacemaker + corosync + clvmd cluster:
>
> [root@controller ~]# pcs status --full
> Cluster name: mycluster
> Last updated: Tue Dec 6 14:09:59 2016
> Last change: Mon Dec 5 21:26:02 2016 by root via cibadmin on controller
> Stack: corosync
> Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition
> with quorum
> 2 nodes and 4 resources configured
>
> Online: [ compute1 (2) controller (1) ]
>
> Full list of resources:
>
> Clone Set: dlm-clone [dlm]
> dlm (ocf::pacemaker:controld): Started compute1
> dlm (ocf::pacemaker:controld): Started controller
> Started: [ compute1 controller ]
> Clone Set: clvmd-clone [clvmd]
> clvmd (ocf::heartbeat:clvm): Started compute1
> clvmd (ocf::heartbeat:clvm): Started controller
> Started: [ compute1 controller ]
>
> Node Attributes:
> * Node compute1 (2):
> * Node controller (1):
>
> Migration Summary:
> * Node compute1 (2):
> * Node controller (1):
>
> PCSD Status:
> controller: Online
> compute1: Online
>
> Daemon Status:
> corosync: active/disabled
> pacemaker: active/disabled
> pcsd: active/enabled
>
>
>
> step 2:
>
> I create a cluster VG:cinder-volumes:
>
> [root@controller ~]# vgdisplay
> --- Volume group ---
> VG Name cinder-volumes
> System ID
> Format lvm2
> Metadata Areas 1
> Metadata Sequence No 44
> VG Access read/write
> VG Status resizable
> Clustered yes
> Shared no
> MAX LV 0
> Cur LV 0
> Open LV 0
> Max PV 0
> Cur PV 1
> Act PV 1
> VG Size 1000.00 GiB
> PE Size 4.00 MiB
> Total PE 255999
> Alloc PE / Size 0 / 0
> Free PE / Size 255999 / 1000.00 GiB
> VG UUID aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
>
> [root@controller ~]#
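For completeness, a VG that reports "Clustered yes" as above is normally created while clvmd is running on all nodes, along these lines (/dev/sdb is a placeholder for the shared SAN device):

# switch LVM to cluster-wide locking (locking_type=3) on every node
lvmconf --enable-cluster

# create the clustered VG on the shared device
pvcreate /dev/sdb
vgcreate --clustered y cinder-volumes /dev/sdb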
>
>
> Step 3 :
>
> I create an LV and want it to be visible and accessible on the compute1
> node, but this fails:
>
> [root@controller ~]# lvcreate --name test001 --size 1024m cinder-volumes
> Logical volume "test001" created.
> [root@controller ~]#
> [root@controller ~]#
> [root@controller ~]# lvs
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> test001 cinder-volumes -wi-a----- 1.00g
>
> [root@controller ~]#
> [root@controller ~]#
> [root@controller ~]# ll /dev/cinder-volumes/test001
> lrwxrwxrwx 1 root root 7 Dec 6 14:13 /dev/cinder-volumes/test001 -> ../dm-0
>
>
>
> I can access it on the controller node. On the compute1 node I can see it
> with the lvs command, but I cannot access it with ls, because it does not
> exist in the /dev/cinder-volumes directory:
>
>
> [root@compute1 ~]# lvs
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> test001 cinder-volumes -wi------- 1.00g
>
> [root@compute1 ~]#
> [root@compute1 ~]#
> [root@compute1 ~]# ll /dev/cinder-volumes
> ls: cannot access /dev/cinder-volumes: No such file or directory
> [root@compute1 ~]#
> [root@compute1 ~]#
> [root@compute1 ~]# lvscan
> inactive '/dev/cinder-volumes/test001' [1.00 GiB] inherit
> [root@compute1 ~]#
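As the lvs output above shows, the new LV is active only on the controller, so compute1 has no /dev/cinder-volumes/test001 device node until the LV is activated there. A sketch of the relevant activation commands (clvmd must be running on both nodes; if the LV is exclusively active on the controller it has to be deactivated there first):

# request activation on every cluster node, via clvmd
lvchange -ay cinder-volumes/test001

# or activate only on the local node
lvchange -aly cinder-volumes/test001

# deactivate everywhere (run before switching activation modes)
lvchange -an cinder-volumes/test001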
>
>
>
> Is something wrong with my configuration besides stonith? Could you help
> me? Thank you very much.
>
>
>
>
>
>
>
>
>
>
> 2016-12-06 11:37 GMT+08:00 Digimer <lists at alteeve.ca>:
>
>> On 05/12/16 10:32 PM, su liu wrote:
>> > Digimer, thank you very much!
>> >
>> > I do not need to have the data accessible on both nodes at once. I want
>> > to use clvm + pacemaker + corosync with OpenStack Cinder.
>>
>> I'm not sure what "cinder" is, so I don't know what it needs to work.
>>
>> > Then only one VM needs to access each LV at a time. But the Cinder service,
>> > which runs on the controller node, is responsible for snapshotting the LVs
>> > that are attached to VMs running on other compute nodes (such as the
>> > compute1 node).
>>
>> If you don't need to access an LV on more than one node at a time, then
>> don't add clustered LVM and keep things simple. If you are using DRBD,
>> keep the backup secondary. If you are using LUNs, only connect the LUN
>> to the host that needs it at a given time.
>>
>> In HA, you always want to keep things as simple as possible.
>>
>> > Do I need to activate the LVs in exclusive mode all the time, to support
>> > snapshotting them while they are attached to a VM?
>>
>> If you use clustered LVM, yes, but then you can't access the LV on any
>> other nodes... If you don't need clustered LVM, then no, you continue to
>> use it as simple LVM.
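To tie this back to the subject line: in a clustered VG, lvcreate refuses to snapshot an origin LV that is not exclusively active, so the usual sequence is roughly the following (using the test001 LV from earlier in this thread; the snapshot name and size are illustrative):

# the origin must be active on exactly one node, exclusively
lvchange -an cinder-volumes/test001       # deactivate on all nodes
lvchange -aey cinder-volumes/test001      # re-activate exclusively on this node
lvcreate --snapshot --name test001-snap --size 512m cinder-volumes/test001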
>>
>> Note: Snapshotting VMs is NOT SAFE unless you have a way to be certain
>> that the guest VM has flushed its caches and is made crash-safe before
>> the snapshot is made. Otherwise, your snapshot might be corrupted.
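One common way to get that guarantee, assuming the guest runs the qemu-guest-agent, is to freeze the guest filesystems around the snapshot; a libvirt sketch (the domain name myvm is a placeholder):

# flush and freeze guest filesystems (needs qemu-guest-agent in the guest)
virsh domfsfreeze myvm
# ... take the LVM snapshot here ...
virsh domfsthaw myvm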
>>
>> > The following is the result of the lvs command on the compute1 node:
>> >
>> > [root@compute1 ~]# lvs
>> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>> > volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi------- 1.00g
>> >
>> >
>> >
>> > and on the controller node:
>> >
>> > [root@controller ~]# lvscan
>> > ACTIVE '/dev/cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5' [1.00 GiB] inherit
>> >
>> >
>> >
>> > thank you very much!
>>
>> Did you set up stonith? If not, things will go bad. Not "if", only
>> "when". Even in a test environment, you _must_ set up stonith.
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>> _______________________________________________
>> Users mailing list: Users at clusterlabs.org
>> http://lists.clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>>
>
>