[ClusterLabs] Multiple nfsserver resource groups
Strahil Nikolov
hunter86_bg at yahoo.com
Sat Mar 7 09:45:47 EST 2020
On March 7, 2020 1:03:53 PM GMT+02:00, Christoforos Christoforou <christoforos at globalreach.com> wrote:
>That was my thought. I'll take the safe route and set it up that way.
>I'll also give the LVM resource a go as well.
>
>Thank you Strahil.
>Regards,
>Chris
>
>-----Original Message-----
>From: Strahil Nikolov <hunter86_bg at yahoo.com>
>Sent: Saturday, March 7, 2020 12:10 AM
>To: christoforos at globalreach.com; 'Cluster Labs - All topics related to
>open-source clustering welcomed' <users at clusterlabs.org>
>Subject: RE: [ClusterLabs] Multiple nfsserver resource groups
>
>On March 6, 2020 11:58:33 PM GMT+02:00, Christoforos Christoforou
><christoforos at globalreach.com> wrote:
>>>I don't get it.
>>>Why will the last nfs_shared_infodir be mounted on /var/lib/nfs?
>>>As far as I remember, you set that dir in any location you want as a
>>>Filesystem resource.
>>The dir you set in the resource options is indeed the dir used, but
>>apparently the nfsserver daemon will mount that dir in /var/lib/nfs. I
>>believe it's the last one because it's the last instance of the
>>nfs daemon that starts, so it "steals" the /var/lib/nfs mount from the
>>other 2 that started before.
>>
>>From the ocf_heartbeat_nfsserver manpage
>>(https://manpages.debian.org/testing/resource-agents/ocf_heartbeat_nfsserver.7.en.html):
>>"nfs_shared_infodir
>>The nfsserver resource agent will save nfs related information in this
>>specific directory. And this directory must be able to fail-over before
>>nfsserver itself.
>>(optional, string, no default)
>>rpcpipefs_dir
>>The mount point for the sunrpc file system. Default is
>>/var/lib/nfs/rpc_pipefs. This script will mount (bind)
>>nfs_shared_infodir on /var/lib/nfs/ (cannot be changed), and this
>>script will mount the sunrpc file system on /var/lib/nfs/rpc_pipefs
>>(default, can be changed by this parameter). If you want to move only
>>rpc_pipefs/ (e.g. to keep rpc_pipefs/ local) from default, please set
>>this value.
>>(optional, string, default "/var/lib/nfs/rpc_pipefs")"
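For illustration, a minimal sketch of how those two parameters might be set
on an nfsserver resource; the resource, group, and path names below are
hypothetical, not taken from the setup above:

  # The agent always bind-mounts nfs_shared_infodir onto /var/lib/nfs;
  # only the rpc_pipefs location can be relocated, via rpcpipefs_dir.
  pcs resource create nfs-daemon1 ocf:heartbeat:nfsserver \
      nfs_shared_infodir=/srv/nfsinfo1 \
      rpcpipefs_dir=/srv/nfsinfo1/rpc_pipefs \
      --group nfsgroup1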
>>
>>>Note: You do realize that mpathi on 1 host can be turned into mpathZ
>>>on the other. Either use an alias for each wwid, or do not use friendly
>>>names on those cluster nodes.
>>I know, wwid bindings are set to be identical on both nodes, so it's
>>not an issue.
>>>Also, I would use an LVM resource with 'exclusive=true' in order to avoid
>>>accidental activation of the LV on the other node.
>>I remember trying out the LVM resource when I was doing initial testing
>>but that was 2 years ago, so I can't remember why I went with a filesystem
>>resource instead.
>>I will revisit that, thanks.
>>
>>Any other insight on the nfs_shared_infodir situation?
>>
>>-----Original Message-----
>>From: Strahil Nikolov <hunter86_bg at yahoo.com>
>>Sent: Friday, March 6, 2020 11:20 PM
>>To: christoforos at globalreach.com; Cluster Labs - All topics related to
>>open-source clustering welcomed <users at clusterlabs.org>
>>Subject: Re: [ClusterLabs] Multiple nfsserver resource groups
>>
>>On March 6, 2020 7:56:00 PM GMT+02:00, Christoforos Christoforou
>><christoforos at globalreach.com> wrote:
>>>Hello,
>>>
>>>
>>>
>>>We have a PCS cluster running on 2 CentOS 7 nodes, exposing 2 NFSv3
>>>volumes which are then mounted to multiple servers (around 8).
>>>
>>>We want to have 2 more sets of additional shared NFS volumes, for a
>>>total of 6.
>>>
>>>
>>>
>>>I have successfully configured 3 resource groups, with each group
>>>having the following resources:
>>>
>>>* 1x ocf_heartbeat_IPaddr2 resource for the Virtual IP that exposes
>>>the NFS share assigned to its own NIC.
>>>* 3x ocf_heartbeat_Filesystem resources (1 is for the
>>>nfs_shared_infodir and the other 2 are the ones exposed via the NFS
>>>server)
>>>* 1x ocf_heartbeat_nfsserver resource that uses the aforementioned
>>>nfs_shared_infodir.
>>>* 2x ocf_heartbeat_exportfs resources that expose the other 2
>>>filesystems as NFS shares.
>>>* 1x ocf_heartbeat_nfsnotify resource that has the Virtual IP set as
>>>its own source_host.
>>>
>>>
>>>
>>>All 9 filesystem volumes are mounted via iSCSI to the PCS nodes in
>>>/dev/mapper/mpathX
>>>
>>>So the structure is like so:
>>>
>>>Resource group 1:
>>>
>>>* /dev/mapper/mpatha - shared volume 1
>>>* /dev/mapper/mpathb - shared volume 2
>>>* /dev/mapper/mpathc - nfs_shared_infodir for resource group 1
>>>
>>>Resource group 2:
>>>
>>>* /dev/mapper/mpathd - shared volume 3
>>>* /dev/mapper/mpathe - shared volume 4
>>>* /dev/mapper/mpathf - nfs_shared_infodir for resource group 2
>>>
>>>Resource group 3:
>>>
>>>* /dev/mapper/mpathg - shared volume 5
>>>* /dev/mapper/mpathh - shared volume 6
>>>* /dev/mapper/mpathi - nfs_shared_infodir for resource group 3
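For illustration, resource group 1 above could be built roughly like this
with pcs; every name, IP, mount point, and export option below is a
hypothetical placeholder, and resources start in the order they are added
to the group:

  pcs resource create vip1 ocf:heartbeat:IPaddr2 \
      ip=192.0.2.101 cidr_netmask=24 --group nfsgroup1
  pcs resource create infodir1 ocf:heartbeat:Filesystem \
      device=/dev/mapper/mpathc directory=/srv/nfsinfo1 fstype=xfs --group nfsgroup1
  pcs resource create vol1 ocf:heartbeat:Filesystem \
      device=/dev/mapper/mpatha directory=/srv/vol1 fstype=xfs --group nfsgroup1
  pcs resource create vol2 ocf:heartbeat:Filesystem \
      device=/dev/mapper/mpathb directory=/srv/vol2 fstype=xfs --group nfsgroup1
  pcs resource create nfsd1 ocf:heartbeat:nfsserver \
      nfs_shared_infodir=/srv/nfsinfo1 --group nfsgroup1
  pcs resource create export-vol1 ocf:heartbeat:exportfs \
      clientspec=192.0.2.0/255.255.255.0 options=rw,sync,no_root_squash \
      directory=/srv/vol1 fsid=1 --group nfsgroup1
  pcs resource create export-vol2 ocf:heartbeat:exportfs \
      clientspec=192.0.2.0/255.255.255.0 options=rw,sync,no_root_squash \
      directory=/srv/vol2 fsid=2 --group nfsgroup1
  pcs resource create notify1 ocf:heartbeat:nfsnotify \
      source_host=192.0.2.101 --group nfsgroup1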
>>>
>>>
>>>
>>>My concern is that when I run a df command on the active node, the last
>>>ocf_heartbeat_nfsserver volume (/dev/mapper/mpathi) is mounted to
>>>/var/lib/nfs.
>>>I understand that I cannot change this, but I can change the location
>>>of the rpc_pipefs folder.
>>>
>>>
>>>
>>>I have had this setup running with 2 resource groups in our development
>>>environment, and have not noticed any issues, but since we're planning
>>>to move to production and add a 3rd resource group, I want to make sure
>>>that this setup will not cause any issues. I am by no means an expert
>>>on NFS, so some insight is appreciated.
>>>
>>>
>>>
>>>If this kind of setup is not supported or recommended, I have 2
>>>alternate plans in mind:
>>>
>>>1. Have all resources in the same resource group, in a setup that will
>>>look like this:
>>>
>>>a. 1x ocf_heartbeat_IPaddr2 resource for the Virtual IP that exposes
>>>the NFS share.
>>>b. 7x ocf_heartbeat_Filesystem resources (1 is for the
>>>nfs_shared_infodir and 6 exposed via the NFS server)
>>>c. 1x ocf_heartbeat_nfsserver resource that uses the aforementioned
>>>nfs_shared_infodir.
>>>d. 6x ocf_heartbeat_exportfs resources that expose the other 6
>>>filesystems as NFS shares. Use the clientspec option to restrict to IPs
>>>and prevent unwanted mounts.
>>>e. 1x ocf_heartbeat_nfsnotify resource that has the Virtual IP set as
>>>its own source_host.
>>>
>>>2. Set up 2 more clusters to accommodate our needs
>>>
>>>
>>>
>>>I really want to avoid #2, due to the fact that it will be overkill for
>>>our case.
>>>
>>>Thanks
>>>
>>>
>>>
>>>Christoforos Christoforou
>>>
>>>Senior Systems Administrator
>>>
>>>Global Reach Internet Productions
>>>
>>> <http://www.twitter.com/globalreach> Twitter |
>>><http://www.facebook.com/globalreach> Facebook |
>>><https://www.linkedin.com/company/global-reach-internet-productions>
>>>LinkedIn
>>>
>>>p (515) 996-0996 | <http://www.globalreach.com/> globalreach.com
>>>
>>>
>>
>>I don't get it.
>>Why will the last nfs_shared_infodir be mounted on /var/lib/nfs?
>>As far as I remember, you set that dir in any location you want as a
>>Filesystem resource.
>>
>>Note: You do realize that mpathi on 1 host can be turned into mpathZ
>>on the other. Either use an alias for each wwid, or do not use friendly
>>names on those cluster nodes.
>>Also, I would use an LVM resource with 'exclusive=true' in order to avoid
>>accidental activation of the LV on the other node.
>>
>>Best Regards,
>>Strahil Nikolov
>
>Hm...
>I never noticed that behaviour, but it makes sense.
>When you don't use a cluster, there is only 1 NFS server on the
>system and everything else is controlled via the exports.
>
>I guess you will need to try to combine all 3 groups into a single
>resource group with 1 'nfsserver' and multiple exports.
>
>
>Best Regards,
>Strahil Nikolov
HA NFS is described here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-nfsserver-haaa
The trick for HA-LVM is to:
Run: lvmconf --enable-halvm --services --startstopservices
Set volume_list in /etc/lvm/lvm.conf
That setting defines which local VGs/LVs will be activated, so define all local (for the host) Volume Groups and DO NOT define the NFS VGs.
Note: If your NFS LVs are part of the system VG, then either use vgsplit to move them into another VG, or define in volume_list all LVs that have to be activated and leave the NFS LVs out.
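For example, the relevant lvm.conf entry could look like this (the VG/LV
names here are hypothetical):

  # /etc/lvm/lvm.conf - activate only local VGs at boot, leave the NFS VGs out
  volume_list = [ "rhel_root", "vg_local" ]
  # or, if the NFS LVs share the system VG, list the individual local LVs:
  # volume_list = [ "rhel_root/root", "rhel_root/swap" ]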
To check for mistakes in /etc/lvm/lvm.conf, use the 'pvs', 'vgs' and 'lvs' commands. They complain about errors and point to the line with the issue.
Last (very important): rebuild the initramfs via 'dracut -f' and reboot to test whether the NFS LVs come up active ('a' letter in the 'lvs' attribute output). All NFS LVs must be without the 'a' flag, or you have an error in lvm.conf.
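To illustrate what to look for (hypothetical names), the 5th character of
the Attr column in 'lvs' output is the activation state:

  lvs -o lv_name,vg_name,lv_attr
  #  root    rhel_root  -wi-ao----   <- local LV, active ('a')
  #  nfslv1  vg_nfs1    -wi-------   <- NFS LV, must stay inactive after reboot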
Next, use the LVM cluster resource agent (I use ocf:heartbeat:LVM) with 'exclusive=true'.
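Something along these lines (the VG and group names are placeholders):

  pcs resource create lvm-nfs1 ocf:heartbeat:LVM \
      volgrpname=vg_nfs1 exclusive=true --group nfsgroup1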
Then check all nodes. The node running the resource will have an active LV, while on the other nodes it will remain inactive.
Final check is 'vgs -o +tags' on the nodes.
Best Regards,
Strahil Nikolov