[ClusterLabs] Antw: ocf:lvm2:VolumeGroup Probe Issue
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Wed Nov 9 08:48:39 CET 2016
>>> Marc Smith <marc.smith at mcc.edu> wrote on 08.11.2016 at 17:37 in message
<CAHkw+LezcoryUCeuzPAuW6bwx+-e-352Bx3G9q-=FPTNAQxVVg at mail.gmail.com>:
> Hi,
>
> First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
> not resource-agents, but I'm hoping someone on this list is familiar
> with this RA and can provide some insight.
>
> In my cluster configuration, I'm using ocf:lvm2:VolumeGroup to manage
> my LVM VGs, and I'm using the cluster to manage DLM and CLVM. I have
> my constraints in place and everything is mostly working, except I'm
> hitting a glitch with ocf:lvm2:VolumeGroup and the initial probe
> operation.
>
> On startup, a probe operation (monitor) is issued for all of the
> resources, but ocf:lvm2:VolumeGroup is returning OCF_ERR_GENERIC in
> VolumeGroup_status() (via VolumeGroup_monitor()) since clvmd hasn't
> started yet... this line in VolumeGroup_status() is the trouble:
>
> VGOUT=`vgdisplay -v $OCF_RESKEY_volgrpname 2>&1` || exit $OCF_ERR_GENERIC
>
> When clvmd is not running, 'vgdisplay -v name' will always return
> something like this:
Hi!
We also use cLVM and VGs in SLES11 SP4, but we do not have that problem. Independent of that, I think vgdisplay should simply report that a clustered VG is not active when clvmd is not up and running (vgchange -a y will fail then, of course, but it could at least give an appropriate error message). Maybe for vgdisplay the problem is the "-v". Unfortunately, vgdisplay does not have an "Active yes/no" status output (like HP-UX had).
I wonder whether a simpler solution is to check for a properly named symbolic link in /dev/mapper...
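A minimal sketch of that idea (the function name is my own, and the /dev/mapper naming is an assumption): decide whether the VG is active from the device-mapper nodes alone, so that nothing has to talk to clvmd.

```shell
#!/bin/sh
# Hedged sketch: infer "VG active?" from /dev/mapper device nodes,
# without calling vgdisplay (and thus without needing clvmd to be up).
# Caveat: hyphens in VG/LV names are escaped as "--" in device-mapper
# names, which this simple glob does not handle.
vg_is_active() {
    vg="$1"
    for node in /dev/mapper/"$vg"-*; do
        # If the glob matched nothing, $node is the literal pattern.
        [ -e "$node" ] && return 0
    done
    return 1
}
```

For an inactive or nonexistent VG the glob matches nothing and the function returns 1.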
What I see here is more benign:
LVM(prm_LVM_CFS_VMs)[4971]: WARNING: LVM Volume CFS_VMs is not available (stopped)
But we get a problem from the filesystem probe:
Filesystem(prm_CFS_VMs_fs)[5031]: WARNING: Couldn't find device [/dev/CFS_VMs/xen]. Expected /dev/??? to exist
Then DLM, clvmd, ... are started.
Ulrich
>
> --snip--
> connect() failed on local socket: No such file or directory
> Internal cluster locking initialisation failed.
> WARNING: Falling back to local file-based locking.
> Volume Groups with the clustered attribute will be inaccessible.
> VG name on command line not found in list of VGs: biggie
> Volume group "biggie" not found
> Cannot process volume group biggie
> --snip--
>
> And exits with a status of 5. So, my question is: do I patch the RA?
> Or is there some cluster constraint I can add so that a probe/monitor
> operation isn't performed for the VolumeGroup resource until CLVM has
> been started?
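If patching the RA, one common approach is to treat a failed vgdisplay during a probe as "not running" rather than a hard error. A sketch of that change (the exit codes and ocf_is_probe normally come from the OCF shell function library in resource-agents; they are stubbed here so the snippet is self-contained, and the probe test is simplified):

```shell
#!/bin/sh
# Sketch only: these values are normally provided by ocf-shellfuncs.
OCF_SUCCESS=0
OCF_NOT_RUNNING=7
OCF_ERR_GENERIC=1

# Simplified stand-in for ocf_is_probe: a probe is a monitor with interval 0.
ocf_is_probe() {
    [ "$__OCF_ACTION" = "monitor" ] && \
        [ "${OCF_RESKEY_CRM_meta_interval:-0}" = "0" ]
}

VolumeGroup_status() {
    # vgdisplay fails while clvmd is down; during a probe, report
    # "not running" instead of a hard error, so the cluster can go on
    # to start DLM and clvmd in constraint order.
    if ! VGOUT=$(vgdisplay -v "$OCF_RESKEY_volgrpname" 2>&1); then
        ocf_is_probe && return $OCF_NOT_RUNNING
        return $OCF_ERR_GENERIC
    fi
    return $OCF_SUCCESS
}
```

With this change the initial probe reports the VG as stopped, and the real monitor (with a nonzero interval) still flags a genuine failure.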
>
> Any other advice? Is ocf:heartbeat:LVM or ocf:lvm2:VolumeGroup the
> more popular RA for managing LVM VGs? Any comments from other users
> on experiences using either (good, bad)? Both appear to achieve the
> same function, just a bit differently.
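For reference, a typical ocf:heartbeat:LVM setup looks roughly like this (crmsh syntax; the resource names are my own, and the VG name is taken from the error output above). Note that ordering constraints only sequence start operations; they do not suppress the initial probe, which is why the probe behaviour of the RA itself matters:

```
primitive p_dlm ocf:pacemaker:controld
primitive p_clvmd ocf:lvm2:clvmd
primitive p_vg ocf:heartbeat:LVM \
    params volgrpname="biggie" \
    op monitor interval="60s"
group g_base p_dlm p_clvmd
clone cl_base g_base
order o_base_before_vg inf: cl_base p_vg
colocation col_vg_with_base inf: p_vg cl_base
```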
>
>
> Thanks,
>
> Marc
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org