<div dir="ltr"><div><div><div><div>Hi Emmanuel, thank you for you support. I did a lot of checks during the WE and there are some updates:<br></div>- Main problem is that ocf:heartbeat:LVM is old. The current version on centos 7 is 3.9.5 (package resource-agents). More precisely, in 3.9.5 the monitor function has one important assumption: the underlying storage is shared between all nodes in the cluster. So the monitor function checks the presence of the volume group on all nodes. From version 3.9.6 this is not the normal behavior and the monitor function (LVM_status) returns $OCF_NOT_RUNNING from slaves nodes without errors. You can check this in the file /usr/lib/ocf/resource.d/heartbeat/LVM in lines 340-351 that disappears in version 3.9.6.<br><br></div>Obviously this is not error, but an important change in the cluster architecture because I need to use drbd in dual primary mode when version 3.9.5 is used. My personal idea is that drbd in dual primary mode with lvm is not a good idea due to the fact that I don't need an active/active cluster.<br><br></div>Anyway, thank you for your time again<br></div>Marco<br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-04-13 15:54 GMT+02:00 emmanuel segura <span dir="ltr"><<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>the first thing that you need to configure is the stonith, because you have this constraint "constraint order promote DrbdResClone then start HALVM"<br><br></div>To recover and promote drbd to master when you crash a node, configurare the drbd fencing handler.<br><br></div>pacemaker execute monitor in both nodes, so this is normal, to test why monitor fail, use <span id="m_8470483581300695003gmail-:tt.co" class="m_8470483581300695003gmail-tL8wMe m_8470483581300695003gmail-EMoHub" style="text-align:left" dir="ltr">ocf-tester</span><br></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">2018-04-13 15:29 GMT+02:00 Marco Marino <span dir="ltr"><<a href="mailto:marino.mrc@gmail.com" target="_blank">marino.mrc@gmail.com</a>></span>:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div><div><div>Hello, I'm trying to configure a simple 2 node cluster with drbd and HALVM (ocf:heartbeat:LVM) but I have a problem that I'm not able to solve, to I decided to write this long post. I need to really understand what I'm doing and where I'm doing wrong. <br></div>More precisely, I'm configuring a pacemaker cluster with 2 nodes and only one drbd resource. 
2018-04-13 15:54 GMT+02:00 emmanuel segura <emi2fast@gmail.com>:

The first thing that you need to configure is stonith, because you have this constraint: "constraint order promote DrbdResClone then start HALVM".

To recover and promote drbd to master when you crash a node, configure the drbd fencing handler.

Pacemaker executes monitor on both nodes, so this is normal. To test why monitor fails, use ocf-tester.
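(For reference, the drbd fencing handler mentioned above is usually wired up roughly like this on drbd 8.4; this is only a sketch to adapt, the handler scripts ship with drbd84-utils:)

  # /etc/drbd.d/global_common.conf -- sketch, not a drop-in config
  common {
      disk {
          # resource-and-stonith is the usual choice once real stonith is in place;
          # resource-only is the weaker variant without node-level fencing
          fencing resource-and-stonith;
      }
      handlers {
          fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
          after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
  }

  # and the kind of ocf-tester run suggested above (note that ocf-tester really
  # executes start/stop/monitor against the agent, so use it on a test node):
  ocf-tester -n HALVM -o volgrpname=havolumegroup /usr/lib/ocf/resource.d/heartbeat/LVM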
2018-04-13 15:29 GMT+02:00 Marco Marino <marino.mrc@gmail.com>:

Hello, I'm trying to configure a simple 2 node cluster with drbd and HALVM (ocf:heartbeat:LVM), but I have a problem that I'm not able to solve, so I decided to write this long post. I need to really understand what I'm doing and where I'm going wrong. More precisely, I'm configuring a pacemaker cluster with 2 nodes and only one drbd resource. Here are all the operations:

- System configuration
  hostnamectl set-hostname pcmk[12]
  yum update -y
  yum install vim wget git -y
  vim /etc/sysconfig/selinux -> permissive mode
  systemctl disable firewalld
  reboot

- Network configuration
  [pcmk1]
  nmcli connection modify corosync ipv4.method manual ipv4.addresses 192.168.198.201/24 ipv6.method ignore connection.autoconnect yes
  nmcli connection modify replication ipv4.method manual ipv4.addresses 192.168.199.201/24 ipv6.method ignore connection.autoconnect yes
  [pcmk2]
  nmcli connection modify corosync ipv4.method manual ipv4.addresses 192.168.198.202/24 ipv6.method ignore connection.autoconnect yes
  nmcli connection modify replication ipv4.method manual ipv4.addresses 192.168.199.202/24 ipv6.method ignore connection.autoconnect yes

  ssh-keygen -t rsa
  ssh-copy-id root@pcmk[12]
  scp /etc/hosts root@pcmk2:/etc/hosts

- Drbd repo configuration and drbd installation
  rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
  rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
  yum update -y
  yum install drbd84-utils kmod-drbd84 -y

- Drbd configuration
  Creating a new partition on top of /dev/vdb -> /dev/vdb1 of type "Linux" (83)
  [/etc/drbd.d/global_common.conf]
    usage-count no;
  [/etc/drbd.d/myres.res]
    resource myres {
        on pcmk1 {
            device /dev/drbd0;
            disk /dev/vdb1;
            address 192.168.199.201:7789;
            meta-disk internal;
        }
        on pcmk2 {
            device /dev/drbd0;
            disk /dev/vdb1;
            address 192.168.199.202:7789;
            meta-disk internal;
        }
    }

  scp /etc/drbd.d/myres.res root@pcmk2:/etc/drbd.d/myres.res
  systemctl start drbd    <-- only for test. The service is disabled at boot!
  drbdadm create-md myres
  drbdadm up myres
  drbdadm primary --force myres

- LVM configuration
  [root@pcmk1 ~]# lsblk
  NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  sr0            11:0    1 1024M  0 rom
  vda           252:0    0   20G  0 disk
  ├─vda1        252:1    0    1G  0 part /boot
  └─vda2        252:2    0   19G  0 part
    ├─cl-root   253:0    0   17G  0 lvm  /
    └─cl-swap   253:1    0    2G  0 lvm  [SWAP]
  vdb           252:16   0    8G  0 disk
  └─vdb1        252:17   0    8G  0 part    <--- /dev/vdb1 is the partition I'd like to use as backing device for drbd
    └─drbd0     147:0    0    8G  0 disk

  [/etc/lvm/lvm.conf]
    write_cache_state = 0
    use_lvmetad = 0
    filter = [ "a|drbd.*|", "a|vda.*|", "r|.*|" ]

  Disabling the lvmetad service:
  systemctl disable lvm2-lvmetad.service
  systemctl disable lvm2-lvmetad.socket
  reboot

- Creating the volume group and the logical volume
  systemctl start drbd    (both nodes)
  drbdadm primary myres
  pvcreate /dev/drbd0
  vgcreate havolumegroup /dev/drbd0
  lvcreate -n c-vol1 -L1G havolumegroup
  [root@pcmk1 ~]# lvs
    LV     VG            Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
    root   cl            -wi-ao---- <17.00g
    swap   cl            -wi-ao----   2.00g
    c-vol1 havolumegroup -wi-a-----   1.00g
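(Not part of my original run, but a quick way to check by hand that the LVM filter and the drbd roles behave as expected would be something like:)

  # on the node where drbd is Primary: the VG should be visible and activatable
  vgs havolumegroup
  vgchange -ay havolumegroup

  # on the Secondary: /dev/drbd0 cannot be opened while the node is Secondary and
  # /dev/vdb1 is excluded by the filter, so the VG should simply not be visible here
  vgs havolumegroup    # expected to report that the volume group is not found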
- Cluster configuration
  yum install pcs fence-agents-all -y
  systemctl enable pcsd
  systemctl start pcsd
  echo redhat | passwd --stdin hacluster
  pcs cluster auth pcmk1 pcmk2
  pcs cluster setup --name ha_cluster pcmk1 pcmk2
  pcs cluster start --all
  pcs cluster enable --all
  pcs property set stonith-enabled=false    <--- Just for test!!!
  pcs property set no-quorum-policy=ignore

- Drbd resource configuration
  pcs cluster cib drbd_cfg
  pcs -f drbd_cfg resource create DrbdRes ocf:linbit:drbd drbd_resource=myres op monitor interval=60s
  pcs -f drbd_cfg resource master DrbdResClone DrbdRes master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  [root@pcmk1 ~]# pcs -f drbd_cfg resource show
   Master/Slave Set: DrbdResClone [DrbdRes]
       Stopped: [ pcmk1 pcmk2 ]
  [root@pcmk1 ~]#

  Testing the failover with a forced shutoff of pcmk1: when pcmk1 comes back up, drbd is slave on it, but the logical volume is not active on pcmk2. So I need HALVM:
  [root@pcmk2 ~]# lvs
    LV     VG            Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
    root   cl            -wi-ao---- <17.00g
    swap   cl            -wi-ao----   2.00g
    c-vol1 havolumegroup -wi-------   1.00g
  [root@pcmk2 ~]#

- Lvm resource and constraints
  pcs cluster cib lvm_cfg
  pcs -f lvm_cfg resource create HALVM ocf:heartbeat:LVM volgrpname=havolumegroup
  pcs -f lvm_cfg constraint colocation add HALVM with master DrbdResClone INFINITY
  pcs -f lvm_cfg constraint order promote DrbdResClone then start HALVM

  [root@pcmk1 ~]# pcs -f lvm_cfg constraint
  Location Constraints:
  Ordering Constraints:
    promote DrbdResClone then start HALVM (kind:Mandatory)
  Colocation Constraints:
    HALVM with DrbdResClone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master)
  Ticket Constraints:
  [root@pcmk1 ~]#

  [root@pcmk1 ~]# pcs status
  Cluster name: ha_cluster
  Stack: corosync
  Current DC: pcmk2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
  Last updated: Fri Apr 13 15:12:49 2018
  Last change: Fri Apr 13 15:05:18 2018 by root via cibadmin on pcmk1

  2 nodes configured
  2 resources configured

  Online: [ pcmk1 pcmk2 ]

  Full list of resources:

   Master/Slave Set: DrbdResClone [DrbdRes]
       Masters: [ pcmk2 ]
       Slaves: [ pcmk1 ]

  Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled

  #########[PUSHING NEW CONFIGURATION]#########
  [root@pcmk1 ~]# pcs cluster cib-push lvm_cfg
  CIB updated
  [root@pcmk1 ~]# pcs status
  Cluster name: ha_cluster
  Stack: corosync
  Current DC: pcmk2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
  Last updated: Fri Apr 13 15:12:57 2018
  Last change: Fri Apr 13 15:12:55 2018 by root via cibadmin on pcmk1

  2 nodes configured
  3 resources configured

  Online: [ pcmk1 pcmk2 ]

  Full list of resources:

   Master/Slave Set: DrbdResClone [DrbdRes]
       Masters: [ pcmk2 ]
       Slaves: [ pcmk1 ]
   HALVM (ocf::heartbeat:LVM): Started pcmk2

  Failed Actions:
  * HALVM_monitor_0 on pcmk1 'unknown error' (1): call=13, status=complete, exitreason='LVM Volume havolumegroup is not available',
      last-rc-change='Fri Apr 13 15:12:56 2018', queued=0ms, exec=52ms

  Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled
  [root@pcmk1 ~]#

  ##########[TRYING TO CLEANUP RESOURCE CONFIGURATION]##################
  [root@pcmk1 ~]# pcs resource cleanup
  Waiting for 1 replies from the CRMd. OK
  [root@pcmk1 ~]# pcs status
  Cluster name: ha_cluster
  Stack: corosync
  Current DC: pcmk2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
  Last updated: Fri Apr 13 15:13:18 2018
  Last change: Fri Apr 13 15:12:55 2018 by root via cibadmin on pcmk1

  2 nodes configured
  3 resources configured

  Online: [ pcmk1 pcmk2 ]

  Full list of resources:

   Master/Slave Set: DrbdResClone [DrbdRes]
       Masters: [ pcmk2 ]
       Slaves: [ pcmk1 ]
   HALVM (ocf::heartbeat:LVM): Started pcmk2

  Failed Actions:
  * HALVM_monitor_0 on pcmk1 'unknown error' (1): call=26, status=complete, exitreason='LVM Volume havolumegroup is not available',
      last-rc-change='Fri Apr 13 15:13:17 2018', queued=0ms, exec=113ms

  Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled
  [root@pcmk1 ~]#
  #########################################################
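(As a side note, the failed probe can also be reproduced outside pacemaker by running the agent's monitor action by hand on pcmk1; a sketch, assuming the stock agent path and parameter name:)

  # the OCF_* variables are what pacemaker would normally pass to the agent
  OCF_ROOT=/usr/lib/ocf \
  OCF_RESKEY_volgrpname=havolumegroup \
  /usr/lib/ocf/resource.d/heartbeat/LVM monitor
  echo "exit code: $?"    # 7 = OCF_NOT_RUNNING, 1 = OCF_ERR_GENERIC ('unknown error' above)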
  Some details about packages and versions:
  [root@pcmk1 ~]# rpm -qa | grep pacem
  pacemaker-cluster-libs-1.1.16-12.el7_4.8.x86_64
  pacemaker-libs-1.1.16-12.el7_4.8.x86_64
  pacemaker-1.1.16-12.el7_4.8.x86_64
  pacemaker-cli-1.1.16-12.el7_4.8.x86_64
  [root@pcmk1 ~]# rpm -qa | grep coro
  corosynclib-2.4.0-9.el7_4.2.x86_64
  corosync-2.4.0-9.el7_4.2.x86_64
  [root@pcmk1 ~]# rpm -qa | grep drbd
  drbd84-utils-9.1.0-1.el7.elrepo.x86_64
  kmod-drbd84-8.4.10-1_2.el7_4.elrepo.x86_64
  [root@pcmk1 ~]# cat /etc/redhat-release
  CentOS Linux release 7.4.1708 (Core)
  [root@pcmk1 ~]# uname -r
  3.10.0-693.21.1.el7.x86_64
  [root@pcmk1 ~]#
  ############################################################

So it seems to me that the problem is that the "monitor" action of the ocf:heartbeat:LVM resource is executed on both nodes, even though I configured specific colocation and ordering constraints. I don't know where the problem is, but I need to understand how to solve this issue. If possible, I invite someone to reproduce the configuration and, possibly, the issue. It looks like a bug, but obviously I'm not sure. What worries me is that pacemaker is what decides where and when a resource should start, so there is probably something wrong in my constraints configuration.
I'm sorry for this long post.
Thank you,
Marco
_______________________________________________
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org