[Pacemaker] failed over filesystem mount points not coming up on secondary node

Jake Smith jsmith at argotec.com
Mon Oct 1 21:14:18 UTC 2012

----- Original Message -----
> From: "Lonni J Friedman" <netllama at gmail.com>
> To: "The Pacemaker cluster resource manager" <pacemaker at oss.clusterlabs.org>
> Sent: Monday, October 1, 2012 4:31:05 PM
> Subject: Re: [Pacemaker] failed over filesystem mount points not coming up on secondary node
> 
> I'm still dead in the water here, and could really use some clues.
> 
> I tried tweaking my config a bit to simplify it, in the hope that it
> would at least work with fewer resources, but that too fails in the
> exact same fashion.  Specifically, the DRBD resource does failover to
> promote the old slave to a master, but the failover IP never gets
> promoted, and the DRBD backed block device is never mounted on the new
> master.
> 
> farm-ljf1 used to be the master for all resources.  I stopped
> corosync, intending to failover everything to farm-ljf0.  Since I did
> that, here's how things look:
> ##########
> [root@farm-ljf0 ~]# crm status
> ============
> Last updated: Mon Oct  1 13:06:07 2012
> Last change: Mon Oct  1 12:17:16 2012 via cibadmin on farm-ljf1
> Stack: openais
> Current DC: farm-ljf0 - partition WITHOUT quorum
> Version: 1.1.7-2.fc16-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 4 Resources configured.
> ============
> 
> Online: [ farm-ljf0 ]
> OFFLINE: [ farm-ljf1 ]
> 
>  Master/Slave Set: FS0_Clone [FS0]
>      Masters: [ farm-ljf0 ]
>      Stopped: [ FS0:1 ]
> 
> Failed actions:
>     FS0_drbd_start_0 (node=farm-ljf0, call=53, rc=1,
>     status=complete):
> unknown error
> ##########
> 
> I looked in /var/log/cluster/corosync.log from the time when I
> attempted the failover, and spotted the following:
> #########
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: rsc:FS0_drbd:53: start
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr) blockdev:
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr) cannot open /dev/drbd0
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr) :
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr) Wrong medium type
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr) mount: block device /dev/drbd0 is
> write-protected, mounting read-only
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr)
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr) mount: Wrong medium type
> Oct 01 12:56:18 farm-ljf0 lrmd: [924]: info: RA output:
> (FS0_drbd:start:stderr)
> Oct 01 12:56:18 farm-ljf0 crmd: [927]: info: process_lrm_event: LRM
> operation FS0_drbd_start_0 (call=53, rc=1, cib-update=532,
> confirmed=true) unknown error
> Oct 01 12:56:18 farm-ljf0 crmd: [927]: WARN: status_from_rc: Action 40
> (FS0_drbd_start_0) on farm-ljf0 failed (target: 0 vs. rc: 1): Error
> Oct 01 12:56:18 farm-ljf0 crmd: [927]: WARN: update_failcount:
> Updating failcount for FS0_drbd on farm-ljf0 after failed start: rc=1
> (update=INFINITY, time=1349121378)
> Oct 01 12:56:18 farm-ljf0 crmd: [927]: info: abort_transition_graph:
> match_graph_event:277 - Triggered transition abort (complete=0,
> tag=lrm_rsc_op, id=FS0_drbd_last_failure_0,
> magic=0:1;40:287:0:655c1af8-d2e8-4dfa-b084-4d4d36be8ade, cib=0.34.33) :
> Event failed
> #########
> 
> To my eyes, it looks like the attempt to mount the drbd backed storage
> failed.  I don't understand why, as I can manually mount it using the
> exact same parameters in the configuration (which worked fine on the
> master) after the failover.  Perhaps there's some weird race condition
> occurring where it tries to mount before the drbd server has failed
> over?
> 
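
If you want to confirm that theory, check DRBD's role on farm-ljf0 right after the failed start (resource name r0 taken from your config below), e.g.:

cat /proc/drbd
drbdadm role r0

"Wrong medium type" is typically what mount reports when the DRBD device it is handed is still Secondary, which would fit the ordering problem discussed further down.
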
> None of that explains why the failover IP didn't come up on the (old)
> slave.  I don't see any errors or failures in the log with respect to
> ClusterIP.  All I see is:
> #########
> Oct 01 12:56:17 farm-ljf0 pengine: [926]: notice: LogActions: Move
> ClusterIP (Started farm-ljf1 -> farm-ljf0)
> Oct 01 12:56:17 farm-ljf0 crmd: [927]: info: te_rsc_command:
> Initiating action 41: stop ClusterIP_stop_0 on farm-ljf1
> #########
> 
> It looks like it never even tries to bring it up on the (old) slave.
> 
> Anyway, here's the configuration that I was using when all of the
> above transpired:
> ##########
> [root@farm-ljf0 ~]# crm configure show
> node farm-ljf0 \
> 	attributes standby="off"
> node farm-ljf1
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
> 	params ip="10.31.97.100" cidr_netmask="22" nic="eth1" \
> 	op monitor interval="10s" \
> 	meta target-role="Started"
> primitive FS0 ocf:linbit:drbd \
> 	params drbd_resource="r0" \
> 	op monitor interval="10s" role="Master" \
> 	op monitor interval="30s" role="Slave"
> primitive FS0_drbd ocf:heartbeat:Filesystem \
> 	params device="/dev/drbd0" directory="/mnt/sdb1" fstype="xfs"
> group g_services FS0_drbd ClusterIP
> ms FS0_Clone FS0 \
> 	meta master-max="1" master-node-max="1" clone-max="2"
> clone-node-max="1" notify="true"
> location cli-prefer-ClusterIP ClusterIP \
> 	rule $id="cli-prefer-rule-ClusterIP" inf: #uname eq farm-ljf1

This location constraint has a score of infinity, so it prevents ClusterIP from running on any node that isn't named farm-ljf1.  If you only want farm-ljf1 to be preferred, set the score to something finite like 100: instead.
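
For example (just a sketch of the same rule with a finite score):

location cli-prefer-ClusterIP ClusterIP \
	rule $id="cli-prefer-rule-ClusterIP" 100: #uname eq farm-ljf1

Constraints named cli-prefer-* are normally left behind by "crm resource move/migrate", so if that's where this one came from, "crm resource unmove ClusterIP" should remove it again.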

> colocation fs0_on_drbd inf: g_services FS0_Clone:Master
> order FS0_drbd-after-FS0 inf: FS0_Clone:promote g_services

When you specify an action for one resource in an order statement, it is inherited by all the remaining resources unless they explicitly define their own - so this ends up being:
order FS0_drbd-after-FS0 inf: FS0_Clone:promote g_services:promote

You can't promote the resources that are part of the g_services group (promote isn't a supported action for them), so the constraint should be changed to:
order FS0_drbd-after-FS0 inf: FS0_Clone:promote g_services:start
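
The quickest way to apply that is probably "crm configure edit" (change g_services to g_services:start on the order line), but as a one-shot sketch - untested against your cluster, so double-check before running:

crm configure delete FS0_drbd-after-FS0
crm configure order FS0_drbd-after-FS0 inf: FS0_Clone:promote g_services:start

Once the constraint is fixed you will most likely also need to clear the failed start, since the failcount for FS0_drbd on farm-ljf0 was pushed to INFINITY:

crm resource cleanup FS0_drbd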


HTH

Jake

> property $id="cib-bootstrap-options" \
> 	dc-version="1.1.7-2.fc16-ee0730e13d124c3d58f00016c3376a1de5323cff" \
> 	cluster-infrastructure="openais" \
> 	expected-quorum-votes="2" \
> 	stonith-enabled="false" \
> 	no-quorum-policy="ignore"
> ##########
> 
> 
> On Thu, Sep 27, 2012 at 3:10 PM, Lonni J Friedman
> <netllama at gmail.com> wrote:
> > Greetings,
> > I've just started playing with pacemaker/corosync on a two node setup.
> > At this point I'm just experimenting, and trying to get a good feel
> > of how things work.  Eventually I'd like to start using this in a
> > production environment.  I'm running Fedora16-x86_64 with
> > pacemaker-1.1.7 & corosync-1.4.3.  I have DRBD setup and working fine
> > with two resources.  I've verified that pacemaker is doing the right
> > thing when initially configured.  Specifically:
> > * the floating static IP is brought up
> > * DRBD is brought up correctly with a master & slave
> > * the local DRBD backed mount points are mounted correctly
> >
> > Here's the configuration:
> > #########
> > node farm-ljf0 \
> >         attributes standby="off"
> > node farm-ljf1
> > primitive ClusterIP ocf:heartbeat:IPaddr2 \
> >         params ip="10.31.97.100" cidr_netmask="22" nic="eth1" \
> >         op monitor interval="10s"
> > primitive FS0 ocf:linbit:drbd \
> >         params drbd_resource="r0" \
> >         op monitor interval="10" role="Master" \
> >         op monitor interval="30" role="Slave"
> > primitive FS0_drbd ocf:heartbeat:Filesystem \
> >         params device="/dev/drbd0" directory="/mnt/sdb1"
> >         fstype="xfs"
> > primitive FS1 ocf:linbit:drbd \
> >         params drbd_resource="r1" \
> >         op monitor interval="10s" role="Master" \
> >         op monitor interval="30s" role="Slave"
> > primitive FS1_drbd ocf:heartbeat:Filesystem \
> >         params device="/dev/drbd1" directory="/mnt/sdb2"
> >         fstype="xfs"
> > ms FS0_Clone FS0 \
> >         meta master-max="1" master-node-max="1" clone-max="2"
> > clone-node-max="1" notify="true"
> > ms FS1_Clone FS1 \
> >         meta master-max="1" master-node-max="1" clone-max="2"
> > clone-node-max="1" notify="true"
> > location cli-prefer-ClusterIP ClusterIP \
> >         rule $id="cli-prefer-rule-ClusterIP" inf: #uname eq
> >         farm-ljf1
> > colocation fs0_on_drbd inf: FS0_drbd FS0_Clone:Master
> > colocation fs1_on_drbd inf: FS1_drbd FS1_Clone:Master
> > order FS0_drbd-after-FS0 inf: FS0_Clone:promote FS0_drbd
> > order FS1_drbd-after-FS1 inf: FS1_Clone:promote FS1_drbd
> > property $id="cib-bootstrap-options" \
> >         dc-version="1.1.7-2.fc16-ee0730e13d124c3d58f00016c3376a1de5323cff"
> >         \
> >         cluster-infrastructure="openais" \
> >         expected-quorum-votes="2" \
> >         stonith-enabled="false" \
> >         no-quorum-policy="ignore"
> > #########
> >
> > However, when I attempted to simulate a failover situation (I shut down
> > the current master/primary node completely), not everything failed
> > over correctly.  Specifically, the mount points did not get mounted,
> > even though the other two elements did failover correctly.
> > 'farm-ljf1' is the node that I shut down, farm-ljf0 is the node that I
> > expected to inherit all of the resources.  Here's the status:
> > #########
> > [root@farm-ljf0 ~]# crm status
> > ============
> > Last updated: Thu Sep 27 15:00:19 2012
> > Last change: Thu Sep 27 13:59:42 2012 via cibadmin on farm-ljf1
> > Stack: openais
> > Current DC: farm-ljf0 - partition WITHOUT quorum
> > Version: 1.1.7-2.fc16-ee0730e13d124c3d58f00016c3376a1de5323cff
> > 2 Nodes configured, 2 expected votes
> > 7 Resources configured.
> > ============
> >
> > Online: [ farm-ljf0 ]
> > OFFLINE: [ farm-ljf1 ]
> >
> >  ClusterIP      (ocf::heartbeat:IPaddr2):       Started farm-ljf0
> >  Master/Slave Set: FS0_Clone [FS0]
> >      Masters: [ farm-ljf0 ]
> >      Stopped: [ FS0:0 ]
> >  Master/Slave Set: FS1_Clone [FS1]
> >      Masters: [ farm-ljf0 ]
> >      Stopped: [ FS1:0 ]
> >
> > Failed actions:
> >     FS1_drbd_start_0 (node=farm-ljf0, call=23, rc=1,
> >     status=complete):
> > unknown error
> >     FS0_drbd_start_0 (node=farm-ljf0, call=24, rc=1,
> >     status=complete):
> > unknown error
> > #########
> >
> > I eventually brought up the shut down node (farm-ljf1) again, hoping
> > that might at least bring things back into a good state, but it's not
> > working either, and is showing up as OFFLINE:
> > ##########
> > [root@farm-ljf1 ~]# crm status
> > ============
> > Last updated: Thu Sep 27 15:06:54 2012
> > Last change: Thu Sep 27 14:49:06 2012 via cibadmin on farm-ljf1
> > Stack: openais
> > Current DC: NONE
> > 2 Nodes configured, 2 expected votes
> > 7 Resources configured.
> > ============
> >
> > OFFLINE: [ farm-ljf0 farm-ljf1 ]
> > ##########
> >
> >
> > So at this point, I've got two problems:
> > 0) FS mount failover isn't working.  I'm hoping this is some silly
> > configuration issue that can be easily resolved.
> > 1) bringing the "failed" farm-ljf1 node back online doesn't seem to
> > work automatically, and I can't figure out what kind of magic is
> > needed.
> >
> >
> > If this stuff is documented somewhere, I'll gladly read it, if someone
> > can point me in the right direction.
> >
> > thanks!
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started:
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 
> 
