[ClusterLabs] Rhel 7.2 pacemaker Cluster - gfs2 over nfs
Dawood Munavar S M
dawood.m at msystechnologies.com
Mon May 1 10:16:29 EDT 2017
Hello,
Could you please share any links/steps for creating an NFS HA cluster on top of a GFS2
file system using Pacemaker? I have completed the steps up to mounting the
GFS2 file systems on the cluster nodes, and now I need to create the cluster
resources for the NFS server, the exports, and the mount on the client.
I followed some Red Hat forums and created the NFS cluster resources, but
"showmount -e" still does not list the export entries:
*[root@node1-emulex ~]# pcs status*
Cluster name: mycluster
Stack: corosync
Current DC: node2-atto (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Mon May 1 09:55:47 2017
Last change: Mon May 1 07:50:25 2017 by root via cibadmin on node1-emulex
2 nodes and 10 resources configured
Online: [ node1-emulex node2-atto ]
Full list of resources:
scsi (stonith:fence_scsi): Started node2-atto
Clone Set: dlm-clone [dlm]
Started: [ node1-emulex node2-atto ]
Clone Set: clvmd-clone [clvmd]
Started: [ node1-emulex node2-atto ]
Clone Set: clusterfs-clone [clusterfs]
Started: [ node1-emulex node2-atto ]
ClusterIP (ocf::heartbeat:IPaddr2): Started node1-emulex
NFS-D (ocf::heartbeat:nfsserver): Started node1-emulex
nfs-cm-shared (ocf::heartbeat:exportfs): Started node2-atto
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@node1-emulex ~]#
[root@node1-emulex ~]# showmount -e 172.30.59.253
Export list for 172.30.59.253:
[root@node1-emulex ~]#
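As a quick check outside of showmount (a diagnostic sketch, not output I have
captured), the export table can be inspected directly on whichever node is
running the exportfs resource (node2-atto per the status above):

exportfs -v                 # kernel export table that the exportfs agent writes to
rpcinfo -p 172.30.59.253    # showmount queries mountd, so mountd must be registered here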
*Note:* The steps below were followed to create the NFS resources after
mounting the GFS2 file systems on the cluster nodes:
1. pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.30.59.253
cidr_netmask=19 op monitor interval=30s
2. pcs resource create NFS-D nfsserver
nfs_shared_infodir=/mnt/pacemaker/nfsinfo
nfs_ip=172.30.59.253
3. pcs resource create nfs-cm-shared exportfs clientspec=172.30.59.254/255.255.224.0
options=rw,sync,no_root_squash directory=/mnt/pacemaker/exports fsid=0
4. Added resource dependencies (see the constraints sketch after this list)
5. [root@node2-atto ~]# showmount -e 172.30.59.253
Export list for 172.30.59.253:
**** No entries ****
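For step 4, the dependencies I added are along these lines (a sketch of the
constraint commands, using the resource ids from the status above). One thing
I notice in the status: nfs-cm-shared is started on node2-atto while ClusterIP
and NFS-D are on node1-emulex, so the export may be landing on a node where
the NFS server and virtual IP are not running:

pcs constraint colocation add NFS-D with ClusterIP INFINITY
pcs constraint colocation add nfs-cm-shared with NFS-D INFINITY
pcs constraint order start ClusterIP then start NFS-D
pcs constraint order start NFS-D then start nfs-cm-shared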
*[root@node1-emulex ~]# pcs status resources clusterfs-clone*
Clone: clusterfs-clone
Meta Attrs: interleave=true
Resource: clusterfs (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/volgroup/vol directory=/mnt/pacemaker/ fstype=gfs2 options=noatime,localflocks
Operations: start interval=0s timeout=60 (clusterfs-start-interval-0s)
stop interval=0s timeout=60 (clusterfs-stop-interval-0s)
monitor interval=10s on-fail=fence (clusterfs-monitor-interval-10s)
[root@node1-emulex ~]#
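Since file locking must be handled by the NFS layer rather than GFS2/DLM when
a GFS2 file system is exported, the localflocks option above matters. As a
sanity check (a sketch), the active mount options can be verified on both nodes:

mount -t gfs2    # 'localflocks' should appear in the option list on every node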
*[root@node1-emulex ~]# pcs status resources ClusterIP*
Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=172.30.59.253 cidr_netmask=19
Operations: start interval=0s timeout=20s (ClusterIP-start-interval-0s)
stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
monitor interval=30s (ClusterIP-monitor-interval-30s)
[root@node1-emulex ~]#
*[root@node1-emulex ~]# pcs status resources NFS-D*
Resource: NFS-D (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_shared_infodir=/mnt/pacemaker/nfsinfo nfs_ip=172.30.59.253
Operations: start interval=0s timeout=40 (NFS-D-start-interval-0s)
stop interval=0s timeout=20s (NFS-D-stop-interval-0s)
monitor interval=10 timeout=20s (NFS-D-monitor-interval-10)
[root@node1-emulex ~]#
*[root@node1-emulex ~]# pcs status resources nfs-cm-shared*
Resource: nfs-cm-shared (class=ocf provider=heartbeat type=exportfs)
Attributes: clientspec=172.30.59.254/255.255.224.0
options=rw,sync,no_root_squash directory=/mnt/pacemaker/exports fsid=0
Operations: start interval=0s timeout=40 (nfs-cm-shared-start-interval-0s)
stop interval=0s timeout=120 (nfs-cm-shared-stop-interval-0s)
monitor interval=10 timeout=20 (nfs-cm-shared-monitor-interval-10)
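As an alternative to individual ordering/colocation constraints, the three NFS
resources could be kept together in one group (a sketch; 'nfsgroup' is a name
assumed here), so they always run on the same node and start in the listed order:

pcs resource group add nfsgroup ClusterIP NFS-D nfs-cm-shared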
*Query:* I went through Red Hat forums, and they mention that exporting a
GFS2 file system in an active/active configuration is only supported when
using *Samba+CTDB* to export the GFS2 file system. Please let us know whether
configuring CTDB is mandatory when NFS is served over GFS2, or whether any
other option is available.
Thanks,
Munavar.