[ClusterLabs] Lost access with the volume while ZFS and other resources migrate to other node (reset VM)
Александр
gitarist_93 at list.ru
Sun Apr 2 19:47:58 EDT 2023
A Pacemaker + Corosync cluster is assembled from 2 virtual machines (Ubuntu 22.04, 16 GB RAM, 8 CPUs each); an HBA is passed through to each of them to connect to a disk shelf, following the instructions at https://netbergtw.com/top-support/articles/zfs-cib/. A ZFS pool was built from 4 disks in dRAID1, and the resources were configured: a virtual IP, an iSCSITarget, and an iSCSILun. The LUN is connected in VMware. During an abnormal shutdown of a node the resources migrate, but at the moment this happens VMware loses contact with the LUN, which should not happen. The journalctl log from the time of the migration is here: https://pastebin.com/eLj8DdtY

I also tried building shared storage on DRBD with cloned VIP and Target resources, but that does not work either; in addition, every time resources move there are problems starting them. Any ideas what can be done about this? Losing access to the LUN even for a couple of seconds is already critical.
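For reference, a minimal sketch of the resource setup described above, using the stock ocf:heartbeat agents (ZFS, IPaddr2, iSCSITarget, iSCSILogicalUnit). The pool name, IQN, IP address, and zvol path are placeholders, not taken from the actual configuration; grouping the resources forces them to start in order and fail over together:

```shell
# Sketch only -- pool name, IQN, IP, and device path are hypothetical.

# ZFS pool import/export, managed by the cluster
pcs resource create zfs-pool ocf:heartbeat:ZFS pool=tank \
    op start timeout=90 op stop timeout=90

# iSCSI target and a LUN backed by a zvol on that pool
pcs resource create iscsi-target ocf:heartbeat:iSCSITarget \
    iqn=iqn.2023-04.local.cluster:storage
pcs resource create iscsi-lun ocf:heartbeat:iSCSILogicalUnit \
    target_iqn=iqn.2023-04.local.cluster:storage \
    lun=1 path=/dev/zvol/tank/lun1

# Floating IP the VMware initiator connects to
pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24

# One group = one failover unit, started in listed order:
# pool -> target -> LUN -> IP. Bringing the VIP up last means the
# initiator's reconnect only succeeds once the LUN is exported again.
pcs resource group add storage-group zfs-pool iscsi-target iscsi-lun vip
```

Even with correct ordering, failover time is bounded below by pool export/import plus target restart, so it is also worth checking the initiator-side recovery timeout in VMware's software iSCSI adapter settings, which controls how long a path may be dead before the datastore is declared lost.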
corosync-qdevice/jammy,now 3.0.1-1 amd64 [installed]
corosync-qnetd/jammy,now 3.0.1-1 amd64 [installed]
corosync/jammy,now 3.1.6-1ubuntu1 amd64 [installed]
pacemaker-cli-utils/jammy,now 2.1.2-1ubuntu3 amd64 [installed,automatic]
pacemaker-common/jammy,now 2.1.2-1ubuntu3 all [installed,automatic]
pacemaker-resource-agents/jammy,now 2.1.2-1ubuntu3 all [installed,automatic]
pacemaker/jammy,now 2.1.2-1ubuntu3 amd64 [installed]
pcs/jammy,now 0.10.11-2ubuntu3 all [installed]