[ClusterLabs] Fwd: Cluster - NFS Share Configuration

Ken Gaillot kgaillot at redhat.com
Thu Jul 6 14:03:18 UTC 2017


On 07/06/2017 07:24 AM, pradeep s wrote:
> Team,
> 
> I am working on configuring a cluster environment for an NFS share using
> Pacemaker. Below are the resources I have configured.
> 
> Quote:
> Group: nfsgroup
> Resource: my_lvm (class=ocf provider=heartbeat type=LVM)
> Attributes: volgrpname=my_vg exclusive=true
> Operations: start interval=0s timeout=30 (my_lvm-start-interval-0s)
> stop interval=0s timeout=30 (my_lvm-stop-interval-0s)
> monitor interval=10 timeout=30 (my_lvm-monitor-interval-10)
> Resource: nfsshare (class=ocf provider=heartbeat type=Filesystem)
> Attributes: device=/dev/my_vg/my_lv directory=/nfsshare fstype=ext4
> Operations: start interval=0s timeout=60 (nfsshare-start-interval-0s)
> stop interval=0s timeout=60 (nfsshare-stop-interval-0s)
> monitor interval=20 timeout=40 (nfsshare-monitor-interval-20)
> Resource: nfs-daemon (class=ocf provider=heartbeat type=nfsserver)
> Attributes: nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true
> Operations: start interval=0s timeout=40 (nfs-daemon-start-interval-0s)
> stop interval=0s timeout=20s (nfs-daemon-stop-interval-0s)
> monitor interval=10 timeout=20s (nfs-daemon-monitor-interval-10)
> Resource: nfs-root (class=ocf provider=heartbeat type=exportfs)
> Attributes: clientspec=10.199.1.0/255.255.255.0
> options=rw,sync,no_root_squash directory=/nfsshare/exports fsid=0
> Operations: start interval=0s timeout=40 (nfs-root-start-interval-0s)
> stop interval=0s timeout=120 (nfs-root-stop-interval-0s)
> monitor interval=10 timeout=20 (nfs-root-monitor-interval-10)
> Resource: nfs-export1 (class=ocf provider=heartbeat type=exportfs)
> Attributes: clientspec=10.199.1.0/255.255.255.0
> options=rw,sync,no_root_squash directory=/nfsshare/exports/export1 fsid=1
> Operations: start interval=0s timeout=40 (nfs-export1-start-interval-0s)
> stop interval=0s timeout=120 (nfs-export1-stop-interval-0s)
> monitor interval=10 timeout=20 (nfs-export1-monitor-interval-10)
> Resource: nfs-export2 (class=ocf provider=heartbeat type=exportfs)
> Attributes: clientspec=10.199.1.0/255.255.255.0
> options=rw,sync,no_root_squash directory=/nfsshare/exports/export2 fsid=2
> Operations: start interval=0s timeout=40 (nfs-export2-start-interval-0s)
> stop interval=0s timeout=120 (nfs-export2-stop-interval-0s)
> monitor interval=10 timeout=20 (nfs-export2-monitor-interval-10)
> Resource: nfs_ip (class=ocf provider=heartbeat type=IPaddr2)
> Attributes: ip=10.199.1.86 cidr_netmask=24
> Operations: start interval=0s timeout=20s (nfs_ip-start-interval-0s)
> stop interval=0s timeout=20s (nfs_ip-stop-interval-0s)
> monitor interval=10s timeout=20s (nfs_ip-monitor-interval-10s)
> Resource: nfs-notify (class=ocf provider=heartbeat type=nfsnotify)
> Attributes: source_host=10.199.1.86
> Operations: start interval=0s timeout=90 (nfs-notify-start-interval-0s)
> stop interval=0s timeout=90 (nfs-notify-stop-interval-0s)
> monitor interval=30 timeout=90 (nfs-notify-monitor-interval-30)
> 
> 
> PCS Status
> Quote:
> Cluster name: my_cluster
> Stack: corosync
> Current DC: node3.cluster.com (version
> 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
> Last updated: Wed Jul 5 13:12:48 2017 Last change: Wed Jul 5 13:11:52
> 2017 by root via crm_attribute on node3.cluster.com
> 
> 2 nodes and 10 resources configured
> 
> Online: [ node3.cluster.com node4.cluster.com ]
> 
> Full list of resources:
> 
> fence-3 (stonith:fence_vmware_soap): Started node4.cluster.com
> fence-4 (stonith:fence_vmware_soap): Started node3.cluster.com
> Resource Group: nfsgroup
> my_lvm (ocf::heartbeat:LVM): Started node3.cluster.com
> nfsshare (ocf::heartbeat:Filesystem): Started node3.cluster.com
> nfs-daemon (ocf::heartbeat:nfsserver): Started node3.cluster.com
> nfs-root (ocf::heartbeat:exportfs): Started node3.cluster.com
> nfs-export1 (ocf::heartbeat:exportfs): Started node3.cluster.com
> nfs-export2 (ocf::heartbeat:exportfs): Started node3.cluster.com
> nfs_ip (ocf::heartbeat:IPaddr2): Started node3.cluster.com
> nfs-notify (ocf::heartbeat:nfsnotify): Started node3.cluster.com
> 
> I followed the Red Hat guide
> <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Administration/ch-nfsserver-HAAA.html#s1-nfsclustcreate-HAAA>
> to configure this.
> 
> Once configured, I could mount the directory from an NFS client with no
> issues. However, when I put the active node into standby, the resources
> do not start on the secondary node.
> 
> After putting the active node into standby:
> 
> Quote:
> [root at node3 ~]# pcs status
> Cluster name: my_cluster
> Stack: corosync
> Current DC: node3.cluster.com (version
> 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
> Last updated: Wed Jul 5 13:16:05 2017 Last change: Wed Jul 5 13:15:38
> 2017 by root via crm_attribute on node3.cluster.com
> 
> 2 nodes and 10 resources configured
> 
> Node node3.cluster.com: standby
> Online: [ node4.cluster.com ]
> 
> Full list of resources:
> 
> fence-3 (stonith:fence_vmware_soap): Started node4.cluster.com
> fence-4 (stonith:fence_vmware_soap): Started node4.cluster.com
> Resource Group: nfsgroup
> my_lvm (ocf::heartbeat:LVM): Stopped
> nfsshare (ocf::heartbeat:Filesystem): Stopped
> nfs-daemon (ocf::heartbeat:nfsserver): Stopped
> nfs-root (ocf::heartbeat:exportfs): Stopped
> nfs-export1 (ocf::heartbeat:exportfs): Stopped
> nfs-export2 (ocf::heartbeat:exportfs): Stopped
> nfs_ip (ocf::heartbeat:IPaddr2): Stopped
> nfs-notify (ocf::heartbeat:nfsnotify): Stopped
> 
> Failed Actions:
> * fence-3_monitor_60000 on node4.cluster.com
> 'unknown error' (1): call=50, status=Timed Out, exitreason='none',
> last-rc-change='Wed Jul 5 13:11:54 2017', queued=0ms, exec=20012ms
> * fence-4_monitor_60000 on node4.cluster.com
> 'unknown error' (1): call=47, status=Timed Out, exitreason='none',
> last-rc-change='Wed Jul 5 13:05:32 2017', queued=0ms, exec=20028ms
> * my_lvm_start_0 on node4.cluster.com
> 'unknown error' (1): call=49, status=complete, exitreason='Volume group
> [my_vg] does not exist or contains error! Volume group "my_vg" not found',
> last-rc-change='Wed Jul 5 13:05:39 2017', queued=0ms, exec=1447ms
> 
> 
> Daemon Status:
> corosync: active/enabled
> pacemaker: active/enabled
> pcsd: active/enabled
> 
> 
> 
> I am seeing this error,
> Quote:
> ERROR: Volume group [my_vg] does not exist or contains error! Volume
> group "my_vg" not found
> Cannot process volume group my_vg
> 
> This resolves when I create the LVM volume manually on the secondary
> node, but I expect the resources to do that job. Am I missing something
> in this configuration?
> 
> -- 
> Regards,
> Pradeep Anandh

The resource agents manage activating and deactivating the VGs/LVs, but
they must be created outside the cluster beforehand. Depending on your
needs, you may also need to run clvmd (which the cluster can manage) and
create the volumes with the clustered option.
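
For example, a minimal sketch of that one-time setup (not a verified
procedure; it assumes the shared LUN appears as /dev/sdb on both nodes,
and the device name and LV size below are placeholders, with the VG/LV
names taken from the quoted configuration):

  # On whichever node currently sees the shared storage (done once,
  # outside cluster control):
  pvcreate /dev/sdb
  vgcreate my_vg /dev/sdb
  lvcreate -L 10G -n my_lv my_vg
  mkfs.ext4 /dev/my_vg/my_lv

  # On the other node, the same PV/VG/LV should already be visible
  # without creating anything there; if these report nothing, the
  # storage is not actually shared and failover cannot work:
  pvscan
  vgs my_vg
  lvs my_vg

The "Volume group my_vg not found" failure on node4 suggests the second
node cannot see that shared VG at all. For the exclusive=true (HA-LVM)
style used in the quoted configuration, Red Hat's documentation also
describes excluding my_vg from automatic activation on both nodes (the
volume_list setting in /etc/lvm/lvm.conf, followed by rebuilding the
initramfs), so that only the cluster activates the volume group.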



