[ClusterLabs] Antw: [EXT] Re: Q: LVM-activate a shared LV

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Fri Dec 11 03:38:08 EST 2020


Hi!

Several resources are unrelated, and I wanted to keep the complexity low,
specifically because the problem seems to be related to LVM LV activation. So
what could be related are the lvmlockd and DLM resources, but nothing more. Agree?

I'm using SLES15 SP2.

The DLM-related config is:
primitive prm_DLM ocf:pacemaker:controld \
        op start timeout=90 interval=0 \
        op stop timeout=120 interval=0 \
        op monitor interval=60 timeout=60
clone cln_DLM prm_DLM \
        meta interleave=true
colocation col_lvmlockd_DLM inf: cln_lvmlockd cln_DLM
colocation col_raid_DLM inf:  cln_test_raid_md0 cln_DLM
order ord_DLM__lvmlockd Mandatory: cln_DLM cln_lvmlockd
order ord_DLM_raid Mandatory: cln_DLM cln_test_raid_md0
clone cln_lvmlockd prm_lvmlockd \
        meta interleave=true
colocation col_lvm_activate__lvmlockd inf: ( cln_testVG0_DVD_activate cln_testVG0_test-jeos_activate ) cln_lvmlockd
order ord_lvmlockd__lvm_activate Mandatory: cln_lvmlockd cln_testVG0_activate

So the state is:
Clone Set: cln_DLM [prm_DLM]: running on all nodes
Clone Set: cln_lvmlockd [prm_lvmlockd]: running on all nodes
Clone Set: cln_test_raid_md0 [prm_test_raid_md0]: running on all nodes
(provides clustered md0 as PV for test VG0)
Clone Set: cln_testVG0_test-jeos_activate [prm_testVG0_test-jeos_activate]: running on one node only
...

Well, while preparing this material, I found that I had missed an order and a
colocation constraint for lvmlockd, but that primitive was running anyway, so it was
_not_ the problem.
The real problem was: I had created the LV locally, and it seems it was
activated in a way incompatible with lvmlockd.
After "# lvchange -a n testVG0/test-jeos" and "crm_resource -C -r
prm_testVG0_test-jeos_activate" I have:
Clone Set: cln_testVG0_DVD_activate [prm_testVG0_DVD_activate]: running on all nodes
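
For the record, I guess the LV should have been created with lvmlockd in mind from
the start. A rough sketch of how I think that would look (untested on my side; the
size is just an example and the exact options may vary with the lvm2 version):

    # make sure the VG's lockspace is started on this node
    vgchange --lockstart testVG0
    # create the LV without activating it locally
    lvcreate -an -L 20G -n test-jeos testVG0
    # activate it in shared mode, as LVM-activate with activation_mode=shared would
    lvchange -asy testVG0/test-jeos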

The cluster also wanted a "crm resource refresh" after the manual LV deactivation
before it would activate the LV again.
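
So the complete recovery sequence on the node that still held the local activation
was roughly (the "crm resource refresh" form assumes a crmsh that knows that
command; "crm_resource --refresh -r ..." would be the low-level equivalent):

    # deactivate the locally activated LV
    lvchange -a n testVG0/test-jeos
    # clean up the failed resource state so Pacemaker probes it again
    crm_resource -C -r prm_testVG0_test-jeos_activate
    # re-trigger the shared activation via the clone
    crm resource refresh prm_testVG0_test-jeos_activate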


There is still another problem that needed this one fixed first, but maybe
that's worth another thread ;-)

Regards,
Ulrich

>>> Gang He <GHe at suse.com> wrote on 11.12.2020 at 06:18 in message
<HE1PR0401MB2236598E3946496741156EFBCFCA0 at HE1PR0401MB2236.eurprd04.prod.outlook.com>:
> Hi Ulrich
> 
> Which Linux distribution/version do you use? Could you share the whole crm 
> configuration?
> 
> There is a crm configuration demo for your reference.
> primitive dlm ocf:pacemaker:controld \
>         op start interval=0 timeout=90 \
>         op stop interval=0 timeout=100 \
>         op monitor interval=20 timeout=600
> primitive libvirt_stonith stonith:external/libvirt \
>         params hostlist="ghe-nd1,ghe-nd2,ghe-nd3" hypervisor_uri="qemu+tcp://10.67.160.2/system" \
>         op monitor interval=60
> primitive lvmlockd lvmlockd \
>         op start timeout=90 interval=0 \
>         op stop timeout=100 interval=0 \
>         op monitor interval=30 timeout=90
> primitive ocfs2-rear Filesystem \
>         params device="/dev/TEST1_vg/test1_lv" directory="/rear" fstype=ocfs2 options=acl \
>         op monitor interval=20 timeout=60 \
>         op start timeout=60 interval=0 \
>         op stop timeout=180 interval=0 
> primitive test1_vg LVM-activate \
>         params vgname=TEST1_vg vg_access_mode=lvmlockd activation_mode=shared \
>         op start timeout=90s interval=0 \
>         op stop timeout=90s interval=0 \
>         op monitor interval=30s timeout=90s
> group base-group dlm lvmlockd test1_vg ocfs2-rear
> clone base-clone base-group
> property cib-bootstrap-options: \
>         have-watchdog=false \
>         stonith-enabled=true \
>         dc-version="2.0.4+20200616.2deceaa3a-3.3.1-2.0.4+20200616.2deceaa3a" \
>         cluster-infrastructure=corosync \
>         cluster-name=cluster \
>         last-lrm-refresh=1606730020
> 
> 
> 
> Thanks
> Gang 
> 
> ________________________________________
> From: Users <users‑bounces at clusterlabs.org> on behalf of Ulrich Windl 
> <Ulrich.Windl at rz.uni‑regensburg.de>
> Sent: Thursday, December 10, 2020 22:55
> To: users at clusterlabs.org 
> Subject: [ClusterLabs] Q: LVM‑activate a shared LV
> 
> Hi!
> 
> I configured a clustered LV (I think) for activation on three nodes, but it
> won't work. Error is:
>  LVM-activate(prm_testVG0_test-jeos_activate)[48844]: ERROR: LV locked by other host: testVG0/test-jeos Failed to lock logical volume testVG0/test-jeos.
> 
> primitive prm_testVG0_test-jeos_activate LVM-activate \
>         params vgname=testVG0 lvname=test-jeos activation_mode=shared vg_access_mode=lvmlockd \
>         op start timeout=90s interval=0 \
>         op stop timeout=90s interval=0 \
>         op monitor interval=60s timeout=90s
> clone cln_testVG0_test-jeos_activate prm_testVG0_test-jeos_activate \
>         meta interleave=true
> 
> Is this a software bug, or am I using the wrong RA or configuration?
> 
> Regards,
> Ulrich
> 
> 
> 
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users 
> 
> ClusterLabs home: https://www.clusterlabs.org/ 
> 




