<div dir="ltr">Thank you very much.<div><br></div><div>I am new to pacemaker. I have checked the docs, which say that additional fencing devices are needed when configuring stonith, but I do not have such a device in my environment yet.</div><div><br></div><div>I will look into configuring it later.</div><div><br></div><div>For now I want to understand how clustered LVM works. Thank you for your patient explanation.</div><div><br></div><div>The setup is:</div><div><br></div><div>controller node + compute1 node </div><div><br></div><div>I attach a SAN LUN to both the controller and compute1 nodes. Then I run a pacemaker + corosync + clvmd cluster:</div><div><br></div><div><div>[root@controller ~]# pcs status --full</div><div>Cluster name: mycluster</div><div>Last updated: Tue Dec 6 14:09:59 2016<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>Last change: Mon Dec 5 21:26:02 2016 by root via cibadmin on controller</div><div>Stack: corosync</div><div>Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum</div><div>2 nodes and 4 resources configured</div><div><br></div><div>Online: [ compute1 (2) controller (1) ]</div><div><br></div><div>Full list of resources:</div><div><br></div><div> Clone Set: dlm-clone [dlm]</div><div> dlm<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>(ocf::pacemaker:controld):<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>Started compute1</div><div> dlm<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>(ocf::pacemaker:controld):<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>Started controller</div><div> Started: [ compute1 controller ]</div><div> Clone Set: clvmd-clone [clvmd]</div><div> clvmd<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>(ocf::heartbeat:clvm):<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>Started compute1</div><div> clvmd<span class="gmail-Apple-tab-span" style="white-space:pre"> 
</span>(ocf::heartbeat:clvm):<span class="gmail-Apple-tab-span" style="white-space:pre"> </span>Started controller</div><div> Started: [ compute1 controller ]</div><div><br></div><div>Node Attributes:</div><div>* Node compute1 (2):</div><div>* Node controller (1):</div><div><br></div><div>Migration Summary:</div><div>* Node compute1 (2):</div><div>* Node controller (1):</div><div><br></div><div>PCSD Status:</div><div> controller: Online</div><div> compute1: Online</div><div><br></div><div>Daemon Status:</div><div> corosync: active/disabled</div><div> pacemaker: active/disabled</div><div> pcsd: active/enabled</div></div><div><br></div><div><br></div><div><br></div><div>Step 2:</div><div><br></div><div>I create a clustered VG named cinder-volumes:</div><div><br></div><div><div>[root@controller ~]# vgdisplay </div><div> --- Volume group ---</div><div> VG Name cinder-volumes</div><div> System ID </div><div> Format lvm2</div><div> Metadata Areas 1</div><div> Metadata Sequence No 44</div><div> VG Access read/write</div><div> VG Status resizable</div><div> Clustered yes</div><div> Shared no</div><div> MAX LV 0</div><div> Cur LV 0</div><div> Open LV 0</div><div> Max PV 0</div><div> Cur PV 1</div><div> Act PV 1</div><div> VG Size 1000.00 GiB</div><div> PE Size 4.00 MiB</div><div> Total PE 255999</div><div> Alloc PE / Size 0 / 0 </div><div> Free PE / Size 255999 / 1000.00 GiB</div><div> VG UUID aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt</div><div> </div><div>[root@controller ~]#</div></div><div><br></div><div><br></div><div>Step 3:</div><div><br></div><div>I create an LV and want it to be visible and accessible on the compute1 node, but that fails:</div><div><br></div><div><div>[root@controller ~]# lvcreate --name test001 --size 1024m cinder-volumes</div><div> Logical volume "test001" created.</div><div>[root@controller ~]# </div><div>[root@controller ~]# </div><div>[root@controller ~]# lvs</div><div> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert</div><div> test001 
cinder-volumes -wi-a----- 1.00g </div><div>[root@controller ~]# </div><div>[root@controller ~]# </div><div>[root@controller ~]# ll /dev/cinder-volumes/test001 </div><div>lrwxrwxrwx 1 root root 7 Dec 6 14:13 /dev/cinder-volumes/test001 -> ../dm-0</div></div><div><br></div><div><br></div><div><br></div><div>I can access it on the controller node. On the compute1 node I can see it with the lvs command, but I cannot access it with ls, because it does not exist in the /dev/cinder-volumes directory:</div><div><br></div><div><br></div><div><div>[root@compute1 ~]# lvs</div><div> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert</div><div> test001 cinder-volumes -wi------- 1.00g </div><div>[root@compute1 ~]# </div><div>[root@compute1 ~]# </div><div>[root@compute1 ~]# ll /dev/cinder-volumes</div><div>ls: cannot access /dev/cinder-volumes: No such file or directory</div><div>[root@compute1 ~]# </div><div>[root@compute1 ~]# </div><div>[root@compute1 ~]# lvscan </div><div> inactive '/dev/cinder-volumes/test001' [1.00 GiB] inherit</div><div>[root@compute1 ~]#</div></div><div><br></div><div><br></div><div><br></div><div>Is there something wrong with my configuration besides stonith? Could you help me? Thank you very much.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-12-06 11:37 GMT+08:00 Digimer <span dir="ltr"><<a href="mailto:lists@alteeve.ca" target="_blank">lists@alteeve.ca</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 05/12/16 10:32 PM, su liu wrote:<br>
> Digimer, thank you very much!<br>
><br>
> I do not need to have the data accessible on both nodes at once. I want<br>
> to use the clvm+pacemaker+corosync in OpenStack Cinder.<br>
<br>
</span>I'm not sure what "cinder" is, so I don't know what it needs to work.<br>
<span class=""><br>
> Then only one VM needs to access the LV at a time. But the Cinder service,<br>
> which runs on the controller node, is responsible for snapshotting the LVs<br>
> that are attached to the VMs running on other compute nodes (such as the<br>
> compute1 node).<br>
<br>
</span>If you don't need to access an LV on more than one node at a time, then<br>
don't add clustered LVM and keep things simple. If you are using DRBD,<br>
keep the backup secondary. If you are using LUNs, only connect the LUN<br>
to the host that needs it at a given time.<br>
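For example, if the LUN is presented over iSCSI, moving it between hosts by hand looks roughly like this (the target IQN and portal address below are invented placeholders, not from your setup):<br>
<br>
```shell
# On the node that is giving up the volume: log out of the target
iscsiadm -m node -T iqn.2016-12.example:cinder-lun1 -p 192.168.1.100 --logout

# On the node that is taking over: discover and log in to the same target
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node -T iqn.2016-12.example:cinder-lun1 -p 192.168.1.100 --login
```
<br>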
<br>
In HA, you always want to keep things as simple as possible.<br>
<br>
> Do I need to activate the LVs in /exclusive mode all the time, to support<br>
> snapshotting them while they are attached to the VM?/<br>
<br>
If you use clustered LVM, yes, but then you can't access the LV on any<br>
other nodes... If you don't need clustered LVM, then no, you continue to<br>
use it as simple LVM.<br>
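To make that concrete, here is a sketch of exclusive activation with clustered LVM, using the VG/LV names from your earlier output (run on the one node that should own the volume):<br>
<br>
```shell
# Activate the LV exclusively on this node; while the cluster lock is
# held, activation attempts on the other cluster nodes will be refused
lvchange -aey cinder-volumes/test001

# Deactivate it here before another node is allowed to activate it
lvchange -an cinder-volumes/test001
```
<br>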
<br>
Note: snapshotting VMs is NOT SAFE unless you have a way to be certain<br>
that the guest VM has flushed its caches and is made crash-safe before<br>
the snapshot is made. Otherwise, your snapshot might be corrupted.<br>
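One hedged sketch of doing that with a qemu/KVM guest running the qemu-guest-agent (the domain and LV names here are made up for illustration):<br>
<br>
```shell
# Ask the guest agent to sync and freeze the guest's filesystems
virsh domfsfreeze guest01

# Take the LVM snapshot while the guest is quiesced
lvcreate --snapshot --name guest01-snap --size 1G cinder-volumes/volume-guest01

# Thaw the guest immediately after the snapshot is created
virsh domfsthaw guest01
```
<br>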
<br>
> /The following is the result of executing the lvscan command on the compute1 node:/<br>
> /<br>
> /<br>
> /<br>
<span class="">> [root@compute1 ~]# lvs<br>
> LV VG Attr<br>
> LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert<br>
> volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi-------<br>
> 1.00g<br>
><br>
><br>
><br>
> and on the controller node:<br>
><br>
> [root@controller ~]# lvscan<br>
> ACTIVE '/dev/cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5' [1.00<br>
> GiB] inherit<br>
><br>
><br>
><br>
> thank you very much!<br>
<br>
</span>Did you set up stonith? If not, things will go bad. Not "if", only<br>
"when". Even in a test environment, you _must_ set up stonith.<br>
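As an illustration only (the addresses and credentials below are invented; choose the fence agent that matches your hardware's out-of-band interface), an IPMI-based setup with pcs might look like:<br>
<br>
```shell
# One fence device per node, each pointing at that node's BMC/IPMI
pcs stonith create fence-controller fence_ipmilan \
    pcmk_host_list="controller" ipaddr="192.168.1.201" \
    login="admin" passwd="secret" lanplus="1"
pcs stonith create fence-compute1 fence_ipmilan \
    pcmk_host_list="compute1" ipaddr="192.168.1.202" \
    login="admin" passwd="secret" lanplus="1"

# Make sure fencing is actually enabled cluster-wide
pcs property set stonith-enabled=true
```
<br>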
<div class="HOEnZb"><div class="h5"><br>
--<br>
Digimer<br>
Papers and Projects: <a href="https://alteeve.ca/w/" rel="noreferrer" target="_blank">https://alteeve.ca/w/</a><br>
What if the cure for cancer is trapped in the mind of a person without<br>
access to education?<br>
<br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</div></div></blockquote></div><br></div>