[ClusterLabs] Multiple nfsserver resource groups

Christoforos Christoforou christoforos at globalreach.com
Fri Mar 6 16:58:33 EST 2020


>I don't get it.
>Why will the last nfs_shared_infodir be mounted on /var/lib/nfs?
>As far as I remember, you can set that dir to any location you want as a Filesystem resource.
The dir you set in the resource options is indeed the dir used, but the nfsserver resource agent also bind-mounts that dir on /var/lib/nfs. I believe the one visible there is the last one because it belongs to the last nfsserver instance to start, so it "steals" the /var/lib/nfs mount point from the other 2 that started before it.
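As I understand it from the manpage snippet below, each nfsserver start roughly amounts to a bind mount like this (the infodir paths here are just placeholders, not our real mount points):

    mount --bind /path/to/infodir1 /var/lib/nfs   # resource group 1 (mpathc)
    mount --bind /path/to/infodir2 /var/lib/nfs   # resource group 2 (mpathf)
    mount --bind /path/to/infodir3 /var/lib/nfs   # resource group 3 (mpathi), last to start

The earlier bind mounts are still there underneath; the last one simply sits on top of the mount point, which would explain why df only shows mpathi on /var/lib/nfs.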

From the ocf_heartbeat_nfsserver manpage (https://manpages.debian.org/testing/resource-agents/ocf_heartbeat_nfsserver.7.en.html):
"nfs_shared_infodir
The nfsserver resource agent will save nfs related information in this specific directory. And this directory must be able to fail-over before nfsserver itself.
(optional, string, no default)
rpcpipefs_dir
The mount point for the sunrpc file system. Default is /var/lib/nfs/rpc_pipefs. This script will mount (bind) nfs_shared_infodir on /var/lib/nfs/ (cannot be changed), and this script will mount the sunrpc file system on /var/lib/nfs/rpc_pipefs (default, can be changed by this parameter). If you want to move only rpc_pipefs/ (e.g. to keep rpc_pipefs/ local) from default, please set this value.
(optional, string, default "/var/lib/nfs/rpc_pipefs")"
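If sharing the /var/lib/nfs bind between all three groups turns out to be a problem, the only per-group knob I can see here is rpcpipefs_dir. A rough sketch of what I would try for one of the groups (resource, group, and path names below are just placeholders, not our actual config):

    pcs resource create nfs_daemon_3 ocf:heartbeat:nfsserver \
        nfs_shared_infodir=/path/to/infodir3 \
        rpcpipefs_dir=/path/to/infodir3/rpc_pipefs \
        --group nfs_group_3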

>Note: You do realize that mpathi on one host can be turned into mpathZ on the other. Either use an alias for each WWID, or do not use friendly names on those cluster nodes.
I know; the WWID bindings are set to be identical on both nodes, so it's not an issue.
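If I do switch to per-WWID aliases as you suggest, I assume it would be a stanza along these lines in /etc/multipath.conf on both nodes (WWID deliberately left as a placeholder):

    multipaths {
        multipath {
            wwid  <wwid of the mpathi LUN>
            alias nfs3_infodir
        }
    }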
>Also, I would use an LVM resource with 'exclusive=true' to avoid accidental activation of the LV on the other node.
I remember trying out the LVM resource during initial testing, but that was 2 years ago and I can't remember why I went with a Filesystem resource instead.
I will revisit that, thanks.
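If I do go that route, I'm assuming each group would get something like this (VG, resource, and group names are placeholders, and I'd still need to check whether ocf:heartbeat:LVM or the newer LVM-activate agent is the right choice on CentOS 7):

    pcs resource create nfs3_vg ocf:heartbeat:LVM \
        volgrpname=vg_nfs3 exclusive=true \
        --group nfs_group_3

Since resources in a group start in the order listed, the VG resource would have to sit before the Filesystem resources that live on it.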

Any other insight on the nfs_shared_infodir situation?

-----Original Message-----
From: Strahil Nikolov <hunter86_bg at yahoo.com> 
Sent: Friday, March 6, 2020 11:20 PM
To: christoforos at globalreach.com; Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
Subject: Re: [ClusterLabs] Multiple nfsserver resource groups

On March 6, 2020 7:56:00 PM GMT+02:00, Christoforos Christoforou <christoforos at globalreach.com> wrote:
>Hello,
>
> 
>
>We have a PCS cluster running on 2 CentOS 7 nodes, exposing 2 NFSv3 
>volumes which are then mounted to multiple servers (around 8).
>
>We want to add 2 more sets of shared NFS volumes, for a total of 6.
>
> 
>
>I have successfully configured 3 resource groups, with each group 
>having the following resources:
>
>*	1x ocf_heartbeat_IPaddr2 resource for the Virtual IP that exposes
>the NFS share assigned to its own NIC.
>*	3x ocf_heartbeat_Filesystem resources (1 is for the
>nfs_shared_infodir and the other 2 are the ones exposed via the NFS
>server)
>*	1x ocf_heartbeat_nfsserver resource that uses the aforementioned
>nfs_shared_infodir.
>*	2x ocf_heartbeat_exportfs resources that expose the other 2
>filesystems as NFS shares.
>*	1x ocf_heartbeat_nfsnotify resource that has the Virtual IP set as
>its own source_host.
>
> 
>
>All 9 volumes are presented to the PCS nodes via iSCSI as 
>/dev/mapper/mpathX devices.
>
>So the structure is like so:
>
>Resource group 1:
>
>*	/dev/mapper/mpatha - shared volume 1
>*	/dev/mapper/mpathb - shared volume 2
>*	/dev/mapper/mpathc - nfs_shared_infodir for resource group 1
>
>Resource group 2:
>
>*	/dev/mapper/mpathd - shared volume 3
>*	/dev/mapper/mpathe - shared volume 4
>*	/dev/mapper/mpathf - nfs_shared_infodir for resource group 2
>
>Resource group 3:
>
>*	/dev/mapper/mpathg - shared volume 5
>*	/dev/mapper/mpathh - shared volume 6
>*	/dev/mapper/mpathi - nfs_shared_infodir for resource group 3
>
> 
>
>My concern is that when I run a df command on the active node, the last 
>ocf_heartbeat_nfsserver volume (/dev/mapper/mpathi) is the one mounted on 
>/var/lib/nfs.
>I understand that I cannot change this, but I can change the location 
>of the rpc_pipefs folder.
>
> 
>
>I have had this setup running with 2 resource groups in our development 
>environment, and have not noticed any issues, but since we're planning 
>to move to production and add a 3rd resource group, I want to make sure 
>that this setup will not cause any issues. I am by no means an expert 
>on NFS, so some insight is appreciated.
>
> 
>
>If this kind of setup is not supported or recommended, I have 2 
>alternate plans in mind:
>
>1.	Have all resources in the same resource group, in a setup that will
>look like this:
>
>a.	1x ocf_heartbeat_IPaddr2 resource for the Virtual IP that exposes
>the NFS share.
>b.	7x ocf_heartbeat_Filesystem resources (1 is for the
>nfs_shared_infodir and 6 exposed via the NFS server)
>c.	1x ocf_heartbeat_nfsserver resource that uses the aforementioned
>nfs_shared_infodir.
>d.	6x ocf_heartbeat_exportfs resources that expose the other 6
>filesystems as NFS shares. Use the clientspec option to restrict to IPs 
>and prevent unwanted mounts.
>e.	1x ocf_heartbeat_nfsnotify resource that has the Virtual IP set as
>its own source_host.
>
>2.	Setup 2 more clusters to accommodate our needs
>
> 
>
>I really want to avoid #2, since it would be overkill for our case.
>
>Thanks
>
> 
>
>Christoforos Christoforou
>
>Senior Systems Administrator
>
>Global Reach Internet Productions
>
>Twitter: http://www.twitter.com/globalreach | Facebook: http://www.facebook.com/globalreach | LinkedIn: https://www.linkedin.com/company/global-reach-internet-productions
>
>p (515) 996-0996 | globalreach.com
>
> 

I don't get it.
Why will the last nfs_shared_infodir be mounted on /var/lib/nfs?
As far as I remember, you can set that dir to any location you want as a Filesystem resource.

Note: You do realize that mpathi on one host can be turned into mpathZ on the other. Either use an alias for each WWID, or do not use friendly names on those cluster nodes.
Also, I would use an LVM resource with 'exclusive=true' to avoid accidental activation of the LV on the other node.

Best Regards,
Strahil Nikolov