[ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot
Digimer
lists at alteeve.ca
Mon Dec 5 22:15:15 EST 2016
On 05/12/16 09:10 PM, su liu wrote:
> Thanks for your reply. This snapshot limitation will seriously affect my
> application.
Do you really need the data to be accessible on both nodes at once?
Doing that requires a cluster file system as well, like gfs2. These all
require cluster locking (DLM), which is slow compared to normal file
systems, and it adds a lot of complexity.

In my experience, most people who start out thinking they want
concurrent access don't really need it, and dropping that requirement
makes things a lot simpler.
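If you really do need it, the usual approach is gfs2 on top of a
clustered LV. As a rough sketch only (the LV name 'shared0' and mount
point are placeholders; the '-t' argument is <cluster_name>:<fs_name>
and must match your corosync cluster name, 'mycluster' in your pcs
output):

  # run once, on one node; '-j 2' creates one journal per node
  mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 2 /dev/cinder-volumes/shared0

  # mount on both nodes; in practice you'd use a cloned
  # ocf:heartbeat:Filesystem resource rather than doing this by hand
  mount -t gfs2 /dev/cinder-volumes/shared0 /mnt/shared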
> Then, because I do not have a stonith device right now, I just want to
> verify the basic process of snapshotting a clustered LV.
Working stonith *is* part of the basic process. It is integral to
testing failure and recovery, so it should be a high priority, even in a
proof of concept/test environment.
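As a sketch only (the right fence agent depends entirely on your
hardware; the agent choice, IPs and credentials below are placeholders
for IPMI-based fencing):

  pcs stonith create fence_controller fence_ipmilan \
      pcmk_host_list="controller" ipaddr="192.0.2.10" \
      login="admin" passwd="secret" op monitor interval=60s
  pcs stonith create fence_compute1 fence_ipmilan \
      pcmk_host_list="compute1" ipaddr="192.0.2.11" \
      login="admin" passwd="secret" op monitor interval=60s
  pcs property set stonith-enabled=true

Then actually test it; 'pcs stonith fence compute1' should power-cycle
that node.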
> I have one more question:
>
> After I create a VG named cinder-volumes on the controller node, I can
> see it through the vgs command on both the controller and compute1
> nodes. Then I create an LV,
> volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5, and execute the lvs
> command on both nodes:
>
> [root@controller ~]# lvs
>   LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi-a----- 1.00g
> [root@controller ~]# ll /dev/cinder-volumes/
> total 0
> lrwxrwxrwx 1 root root 7 Dec  5 21:29 volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 -> ../dm-0
>
>
>
> [root@compute1 ~]# lvs
>   LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi------- 1.00g
> [root@compute1 ~]# ll /dev/cinder-volumes
> ls: cannot access /dev/cinder-volumes: No such file or directory
>
>
>
> But it seems that the LV's device does not exist on the compute1 node.
> My question is: how can I access the LV on the compute1 node?
>
> thanks very much!
Do you see it after 'lvscan'? You should see it on both nodes at the
same time as soon as it is created, *if* things are working properly. It
is possible, without stonith, that they are not.
Please configure and test stonith, and see if the problem remains. If it
does, tail the system logs on both nodes, create the LV on the
controller and report back what log messages show up.
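Roughly (a sketch; journalctl assumed since you're on EL7):

  # on both nodes: does the LV show up, and is it ACTIVE?
  lvscan

  # on both nodes, in another terminal: watch the logs
  journalctl -f

  # on the controller: create a test LV and watch what happens
  lvcreate -n testlv -L 1G cinder-volumes

If the LV shows on compute1 but stays inactive, 'lvchange -ay
cinder-volumes/testlv' there should activate it and make the /dev
symlink appear.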
digimer
>
> 2016-12-06 9:26 GMT+08:00 Digimer <lists at alteeve.ca>:
>
> On 05/12/16 08:16 PM, su liu wrote:
> > Hi all,
> >
> > I am new to pacemaker and I have some questions about clvmd +
> > pacemaker + corosync. I hope you can explain them to me when you
> > have time. Thank you very much!
> >
> > I have 2 nodes, and pacemaker's status is as follows:
> >
> > [root@controller ~]# pcs status --full
> > Cluster name: mycluster
> > Last updated: Mon Dec  5 18:15:12 2016
> > Last change: Fri Dec  2 15:01:03 2016 by root via cibadmin on compute1
> > Stack: corosync
> > Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
> > 2 nodes and 4 resources configured
> >
> > Online: [ compute1 (2) controller (1) ]
> >
> > Full list of resources:
> >
> > Clone Set: dlm-clone [dlm]
> > dlm (ocf::pacemaker:controld): Started compute1
> > dlm (ocf::pacemaker:controld): Started controller
> > Started: [ compute1 controller ]
> > Clone Set: clvmd-clone [clvmd]
> > clvmd (ocf::heartbeat:clvm): Started compute1
> > clvmd (ocf::heartbeat:clvm): Started controller
> > Started: [ compute1 controller ]
> >
> > Node Attributes:
> > * Node compute1 (2):
> > * Node controller (1):
> >
> > Migration Summary:
> > * Node compute1 (2):
> > * Node controller (1):
> >
> > PCSD Status:
> > controller: Online
> > compute1: Online
> >
> > Daemon Status:
> > corosync: active/disabled
> > pacemaker: active/disabled
> > pcsd: active/enabled
>
> You need to configure and enable (and test!) stonith. This is doubly
> so with clustered LVM/shared storage.
>
> > I create an LV on the controller node and it can be seen on the
> > compute1 node immediately with the 'lvs' command, but the LV is not
> > activated on compute1.
> >
> > Then I want to create a snapshot of the LV, but it fails with the
> > error message:
> >
> > ### volume-4fad87bb-3d4c-4a96-bef1-8799980050d1 must be active
> > exclusively to create snapshot ###
> >
> > Can someone tell me how to snapshot an LV in a clustered LVM
> > environment? Thank you very much.
>
> This is how it works: you can't snapshot a clustered LV, as the error
> indicates. The process is ACTIVE -> deactivate on all nodes -> activate
> exclusively on one node -> snapshot -> set it back to ACTIVE when you
> are done. See the sketch below.
>
> It's not very practical, unfortunately.
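> Roughly, as a sketch (the snapshot name and size are placeholders):
>
>   # deactivate the LV; with clvmd this is relayed to all nodes
>   lvchange -an cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1
>
>   # activate it exclusively on the node that will take the snapshot
>   lvchange -aey cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1
>
>   # now the snapshot is allowed
>   lvcreate -s -n snap0 -L 1G \
>       cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1
>
>   # when finished with the snapshot, remove it and return the origin
>   # to normal cluster-wide activation
>   lvremove cinder-volumes/snap0
>   lvchange -an cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1
>   lvchange -ay cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1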
>
> > Additional information:
> >
> > [root@controller ~]# vgdisplay
> > --- Volume group ---
> > VG Name cinder-volumes
> > System ID
> > Format lvm2
> > Metadata Areas 1
> > Metadata Sequence No 19
> > VG Access read/write
> > VG Status resizable
> > Clustered yes
> > Shared no
> > MAX LV 0
> > Cur LV 1
> > Open LV 0
> > Max PV 0
> > Cur PV 1
> > Act PV 1
> > VG Size 1000.00 GiB
> > PE Size 4.00 MiB
> > Total PE 255999
> > Alloc PE / Size 256 / 1.00 GiB
> > Free PE / Size 255743 / 999.00 GiB
> > VG UUID aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
> >
> > [root@controller ~]# rpm -qa |grep pacem
> > pacemaker-cli-1.1.13-10.el7_2.4.x86_64
> > pacemaker-libs-1.1.13-10.el7_2.4.x86_64
> > pacemaker-1.1.13-10.el7_2.4.x86_64
> > pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64
> >
> >
> > [root@controller ~]# lvs
> >   LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
> >   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi-a----- 1.00g
> >
> > [root@compute1 ~]# lvs
> >   LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
> >   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi------- 1.00g
> >
> >
> > thank you very much!
> >
> >
> >
> >
> >
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?