[Pacemaker] cannot mount gfs2 filesystem

Andrew Beekhof andrew at beekhof.net
Tue Oct 30 05:20:15 UTC 2012


On Mon, Oct 29, 2012 at 4:22 PM, Soni Maula Harriz
<soni.harriz at sangkuriang.co.id> wrote:
> Dear all,
> I configured Pacemaker and Corosync on two CentOS 6.3 servers by following
> the instructions in 'Cluster from Scratch'.
> I started with 'Cluster from Scratch' edition 5 but, since I use CentOS,
> I switched to edition 3 to configure active/active servers.
> Now, on the 1st server (cluster1), the Filesystem resource cannot start: the
> gfs2 filesystem can't be mounted.
>
> this is the crm configuration
> [root at cluster2 ~]# crm configure show
> node cluster1 \
>     attributes standby="off"
> node cluster2 \
>     attributes standby="off"
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>     params ip="xxx.xxx.xxx.229" cidr_netmask="32" clusterip_hash="sourceip" \
>     op monitor interval="30s"
> primitive WebData ocf:linbit:drbd \
>     params drbd_resource="wwwdata" \
>     op monitor interval="60s"
> primitive WebFS ocf:heartbeat:Filesystem \
>     params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
> primitive WebSite ocf:heartbeat:apache \
>     params configfile="/etc/httpd/conf/httpd.conf" statusurl="http://localhost/server-status" \
>     op monitor interval="1min"
> ms WebDataClone WebData \
>     meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> clone WebFSClone WebFS
> clone WebIP ClusterIP \
>     meta globally-unique="true" clone-max="2" clone-node-max="1" interleave="false"
> clone WebSiteClone WebSite \
>     meta interleave="false"
> colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone
> colocation colocation-WebSite-ClusterIP-INFINITY inf: WebSiteClone WebIP
> colocation fs_on_drbd inf: WebFSClone WebDataClone:Master
> order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start
> order WebSite-after-WebFS inf: WebFSClone WebSiteClone
> order order-ClusterIP-WebSite-mandatory : WebIP:start WebSiteClone:start
> property $id="cib-bootstrap-options" \
>     dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
>     cluster-infrastructure="cman" \
>     expected-quorum-votes="2" \
>     stonith-enabled="false" \
>     no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>     resource-stickiness="100"
>
> When I try to mount the filesystem manually, this message appears:
> [root at cluster1 ~]# mount /dev/drbd1 /mnt/
> mount point already used or other mount in progress
> error mounting lockproto lock_dlm
>
> But when I check the mounts, there is no mount from drbd.

What does "ps axf" say?  Is there another mount process running?
Did crm_mon report any errors?  Did you check the system logs?
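Spelling those checks out as commands (a diagnostic sketch only; paths and
service layout assume a stock CentOS 6 box like the one described above):

```shell
# Diagnostic sketch: look for a stuck mount helper and leftover cluster state.
ps axf | grep '[m]ount' || true                 # any mount process still running?
ls /sys/kernel/dlm/ 2>/dev/null || true         # DLM lockspaces currently held
crm_mon -1 2>/dev/null || true                  # one-shot cluster status
grep -iE 'gfs2|dlm' /var/log/messages 2>/dev/null | tail -n 20 || true
```

A leftover entry under /sys/kernel/dlm/ with no corresponding mount is a good
hint that an earlier mount attempt died without cleaning up its lockspace.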

>
> There is another strange thing: the 1st server (cluster1) cannot reboot. It
> hangs with the message 'please standby while rebooting the system'. During
> the reboot process there are two failed actions related to fencing, even
> though I haven't configured any fencing yet. One of the failed actions is:
> 'stopping cluster
> leaving fence domain .... found dlm lockspace /sys/kernel/dlm/web
> fence_tool : cannot leave due to active system       [FAILED]'
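That fence_tool failure usually means the node still holds the DLM lockspace
for the gfs2 filesystem ("web") when the cluster stack tries to stop, so it
cannot leave the fence domain. A hedged recovery sketch (service names assume
the stock CentOS 6 cman stack; adapt to your setup) is to release the
filesystem before stopping cman. Note also that gfs2 over DLM assumes working
fencing, so running with stonith-enabled="false" can contribute to hangs like
this.

```shell
# Sketch: release gfs2/DLM resources before stopping the cluster stack.
# CentOS 6 / cman service names assumed; guards keep each step non-fatal.
umount /var/www/html 2>/dev/null || true   # release the gfs2 mount
dlm_tool ls 2>/dev/null || true            # lockspaces should now be empty
service gfs2 stop 2>/dev/null || true
service clvmd stop 2>/dev/null || true
service cman stop 2>/dev/null || true
```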
>
> Please help me with this problem.
>
> --
> Best Regards,
>
> Soni Maula Harriz
> Database Administrator
> PT. Data Aksara Sangkuriang
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
