[ClusterLabs] Re: [pacemaker + clvm] Cluster lvm must be active exclusively to create snapshot
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Tue Dec 6 03:23:08 EST 2016
>>> su liu <liusu8788 at gmail.com> wrote on 06.12.2016 at 02:16 in message
<CAN2gjWR_nSOGeZoc7RK_6WicA2L=KM_1=Ar9K5--qi7q3vFffw at mail.gmail.com>:
> Hi all,
>
> I am new to Pacemaker and I have some questions about running clvmd +
> pacemaker + corosync. I would appreciate it if you could explain them.
> Thank you very much!
> I have 2 nodes, and the cluster status is as follows:
>
> [root at controller ~]# pcs status --full
> Cluster name: mycluster
> Last updated: Mon Dec  5 18:15:12 2016
> Last change: Fri Dec  2 15:01:03 2016 by root via cibadmin on compute1
> Stack: corosync
> Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
> 2 nodes and 4 resources configured
>
> Online: [ compute1 (2) controller (1) ]
>
> Full list of resources:
>
>  Clone Set: dlm-clone [dlm]
>      dlm        (ocf::pacemaker:controld):      Started compute1
>      dlm        (ocf::pacemaker:controld):      Started controller
>      Started: [ compute1 controller ]
>  Clone Set: clvmd-clone [clvmd]
>      clvmd      (ocf::heartbeat:clvm):  Started compute1
>      clvmd      (ocf::heartbeat:clvm):  Started controller
>      Started: [ compute1 controller ]
>
> Node Attributes:
> * Node compute1 (2):
> * Node controller (1):
>
> Migration Summary:
> * Node compute1 (2):
> * Node controller (1):
>
> PCSD Status:
> controller: Online
> compute1: Online
>
> Daemon Status:
> corosync: active/disabled
> pacemaker: active/disabled
> pcsd: active/enabled
>
> I created an LV on the controller node, and it is visible on the compute1
> node immediately with the 'lvs' command, but the LV is not activated on
> compute1.
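You can check the per-node activation state directly; lv_active is a
standard lvm2 reporting field (run this on each node):

  lvs -o lv_name,lv_attr,lv_active cinder-volumes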
> Then I wanted to create a snapshot of the LV, but it failed with this error
> message:
>
> ### volume-4fad87bb-3d4c-4a96-bef1-8799980050d1 must be active exclusively
> to create snapshot ###
> Can someone tell me how to snapshot an LV in a clustered LVM environment?
> Thank you very much.
Did you try "vgchange -a e ..."?
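For a clustered VG, "exclusive" means the LV is active on exactly one node,
so it must first be deactivated everywhere else. A sketch of the sequence,
using the VG/LV names from your output below (the snapshot name "volume-snap"
and its size are only placeholders):

  # On compute1: deactivate the LV locally
  lvchange -an cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5

  # On controller: re-activate it exclusively
  lvchange -aey cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5

  # On controller: create the snapshot
  lvcreate -s -L 1G -n volume-snap \
    cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5

"vgchange -aey cinder-volumes" does the same for every LV in the VG at once,
provided none of them is active on another node.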
>
>
> Additional information:
>
> [root at controller ~]# vgdisplay
>   --- Volume group ---
>   VG Name               cinder-volumes
>   System ID
>   Format                lvm2
>   Metadata Areas        1
>   Metadata Sequence No  19
>   VG Access             read/write
>   VG Status             resizable
>   Clustered             yes
>   Shared                no
>   MAX LV                0
>   Cur LV                1
>   Open LV               0
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               1000.00 GiB
>   PE Size               4.00 MiB
>   Total PE              255999
>   Alloc PE / Size       256 / 1.00 GiB
>   Free  PE / Size       255743 / 999.00 GiB
>   VG UUID               aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
>
> [root at controller ~]# rpm -qa |grep pacem
> pacemaker-cli-1.1.13-10.el7_2.4.x86_64
> pacemaker-libs-1.1.13-10.el7_2.4.x86_64
> pacemaker-1.1.13-10.el7_2.4.x86_64
> pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64
>
>
> [root at controller ~]# lvs
>   LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi-a----- 1.00g
>
>
> [root at compute1 ~]# lvs
>   LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi------- 1.00g
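Note the Attr column: the fifth character is the activation flag, 'a' on
controller (-wi-a-----, active) versus '-' on compute1 (-wi-------, inactive).
That matches the error above: the LV is active on controller, but only in
shared (non-exclusive) mode, so the snapshot is refused until the LV is
re-activated exclusively, as sketched above.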
>
>
> Thank you very much!