[ClusterLabs] Q: What is lvmlockd locking?

Roger Zhou zzhou at suse.com
Fri Jan 22 06:09:02 EST 2021


On 1/22/21 6:58 PM, Ulrich Windl wrote:
>>>> Roger Zhou <zzhou at suse.com> schrieb am 22.01.2021 um 11:26 in Nachricht
> <8dcd53e2-b65b-aafe-ae29-7bdeea3b881a at suse.com>:
> 
>> On 1/22/21 5:45 PM, Ulrich Windl wrote:
>>>>>> Roger Zhou <zzhou at suse.com> schrieb am 22.01.2021 um 10:18 in Nachricht
>>> <a0c97354-937a-d6e3-b787-25c0ff8ee652 at suse.com>:
>>>
>>>> I guess the naming of lvmlockd and virtlockd may have misled you.
>>>
>>> I agree that there is one "virtlockd" name in the resources that actually
>>> refers to lvmlockd, and that it is confusing.
>>> But: Isn't virtlockd trying to lock the VM images used? Those are located on
>>> a different OCFS2 filesystem here.
>>
>> Right. virtlockd works together with libvirt for Virtual Machines locking.
>>
>>> And I thought virtlockd is using lvmlockd to lock those images. Maybe I'm
>>> just confused.
>>> Even after reading the manual page of virtlockd I could not find out how it
>>> actually does perform locking.
>>>
>>> lsof suggests it used files like this:
>>>
>>> /var/lib/libvirt/lockd/files/f9d587c61002c7480f8b86116eb4f7dfa210e52af7e944762f58c2c2f89a6865
>>
>> This file lock indicates the VM backing file is a qemu image. If the VM
>> backing storage is SCSI or LVM, the directory structure changes to:
>>
>> /var/lib/libvirt/lockd/scsi
>> /var/lib/libvirt/lockd/lvm
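
For reference, this is driven by the libvirt lockd plugin configuration. A
minimal sketch (the option names come from /etc/libvirt/qemu.conf and
/etc/libvirt/qemu-lockd.conf; the lockspace paths below are simply the ones
visible in this thread, your distribution's defaults may differ):

# /etc/libvirt/qemu.conf -- make the QEMU driver use the lockd plugin
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf -- lockspace directories used by virtlockd
auto_disk_leases = 1
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
lvm_lockspace_dir  = "/var/lib/libvirt/lockd/lvm"
scsi_lockspace_dir = "/var/lib/libvirt/lockd/scsi"
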
>>
>> Some years ago, a draft patch set was sent to the libvirt community to add
>> an alternative that lets virtlockd use DLM locks, so that no shared
>> filesystem (NFS, OCFS2, or GFS2(?)) would be needed for
>> "/var/lib/libvirt/lockd". Well, the libvirt community was not very motivated
>> to move it forward.
>>
>>>
>>> That filesystem is OCFS2:
>>> h18:~ # df /var/lib/libvirt/lockd/files
>>> Filesystem     1K-blocks  Used Available Use% Mounted on
>>> /dev/md10         261120 99120    162000  38% /var/lib/libvirt/lockd
>>>
>>>
>>> Could part of the problem be that systemd controls virtlockd, but the
>>> filesystem it needs is controlled by the cluster?
>>>
>>> Do I have to mess with those systemd resources in the cluster?:
>>> systemd:virtlockd  systemd:virtlockd-admin.socket  systemd:virtlockd.socket
>>>
>>
>> It would be a more complete and solid cluster configuration if you do so.
>> Though, I think it could also work to leave libvirtd and virtlockd running
>> outside of the cluster stack, as long as the whole system is not too complex
>> to manage. Anyway, testing could tell.
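
If you do pull them into the cluster, a minimal sketch in crm shell syntax
could look like this (resource and constraint names are illustrative;
cln_lockspace_ocfs2 is assumed to be the clone that provides
/var/lib/libvirt/lockd):

primitive prm_virtlockd systemd:virtlockd
clone cln_virtlockd prm_virtlockd
# run virtlockd only where, and only after, its lockspace directory is mounted
colocation col_virtlockd__lockspace inf: cln_virtlockd cln_lockspace_ocfs2
order ord_lockspace__virtlockd Mandatory: cln_lockspace_ocfs2 cln_virtlockd

Whether the virtlockd socket units need the same treatment depends on how your
distribution activates virtlockd; again, testing could tell.
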
> 
> Hi!
> 
> So basically I have one question: Does virtlockd need a cluster-wide filesystem?
> When running on a single node (the usual case assumed in the docs) a local filesystem will do, but how would virtlockd prevent a VM that uses a shared filesystem or disk from starting on two different nodes?

The libvirt community guides users to use NFS in this case. We, the cluster 
community, could have fun with the cluster filesystem ;)
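
As far as I understand, virtlockd takes POSIX (fcntl) advisory locks on the
lease files in its lockspace directory, so a second node trying to start the
same VM is only refused when that directory sits on a filesystem whose locks
are coherent across the nodes (NFS, OCFS2, GFS2). On the cluster side this is
presumably close to what you already have as cln_lockspace_ocfs2; a minimal
sketch, reusing the /dev/md10 device and mount point from your df output
(names are illustrative):

primitive prm_lockspace_fs ocf:heartbeat:Filesystem \
    params device="/dev/md10" directory="/var/lib/libvirt/lockd" fstype="ocfs2" \
    op monitor interval=20s
clone cln_lockspace_ocfs2 prm_lockspace_fs meta interleave=true
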

Cheers,
Roger


> Unfortunately I had exactly that happen before deploying the virtlockd configuration, and the filesystem for the VM was damaged to a degree that made it unrecoverable.
> 
> Regards,
> Ulrich
> 
>>
>> BR,
>> Roger
>>
>>
>>>>
>>>> Anyway, two more tweaks needed in your CIB:
>>>>
>>>> colocation col_vm__virtlockd inf: ( prm_xen_test-jeos1 prm_xen_test-jeos2
>>>> prm_xen_test-jeos3 prm_xen_test-jeos4 ) cln_lockspace_ocfs2
>>>>
>>>> order ord_virtlockd__vm Mandatory: cln_lockspace_ocfs2 ( prm_xen_test-jeos1
>>>> prm_xen_test-jeos2 prm_xen_test-jeos3 prm_xen_test-jeos4 )
>>>
>>> I'm still trying to understand all that. Thanks for helping so far.
>>>
>>> Regards,
>>> Ulrich
>>>
>>>>
>>>>
>>>> BR,
>>>> Roger
>>>


