[ClusterLabs] Unexpected resource restart
Ken Gaillot
kgaillot at redhat.com
Mon Jan 21 14:20:00 EST 2019
On Mon, 2019-01-21 at 10:44 +0100, Klaus Wenninger wrote:
> On 01/16/2019 04:34 PM, Ken Gaillot wrote:
> > On Wed, 2019-01-16 at 13:41 +0100, Valentin Vidic wrote:
> > > On Wed, Jan 16, 2019 at 12:41:11PM +0100, Valentin Vidic wrote:
> > > > This is what pacemaker says about the resource restarts:
> > > >
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Start    dlm:1        ( node2 )
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Start    lockd:1      ( node2 )
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Restart  gfs2-lvm:0   ( node1 )  due to required storage-clone running
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Restart  gfs2-fs:0    ( node1 )  due to required gfs2-lvm:0 start
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Start    gfs2-lvm:1   ( node2 )
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Start    gfs2-fs:1    ( node2 )
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Restart  ocfs2-lvm:0  ( node1 )  due to required storage-clone running
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Restart  ocfs2-fs:0   ( node1 )  due to required ocfs2-lvm:0 start
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Start    ocfs2-lvm:1  ( node2 )
> > > > Jan 16 11:19:08 node1 pacemaker-schedulerd[713]: notice: * Start    ocfs2-fs:1   ( node2 )
> > >
> > > It seems interleave was required on the gfs2 and ocfs2 clones:
> > >
> > > interleave (default: false)
> > >     If this clone depends on another clone via an ordering
> > >     constraint, is it allowed to start after the local instance of
> > >     the other clone starts, rather than wait for all instances of
> > >     the other clone to start?
> >
> > Exactly, that's the purpose of interleave.
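> >
> > For example (a minimal sketch; the actual ordering constraint in
> > this cluster wasn't posted, so this one is assumed):
> >
> >   pcs constraint order start storage-clone then start gfs2-clone
> >
> > With interleave=true on gfs2-clone, each node's gfs2 instance may
> > start as soon as the storage-clone instance on that same node is up.
> > With interleave=false, every instance of storage-clone cluster-wide
> > must be started first, which is why bringing node2 online restarted
> > the node1 resources in the log above.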
> >
> > In retrospect, interleave=true should have been the default. I've
> > never seen a case where false made sense, and people get bitten by
> > overlooking it all the time. False is the default because it's
> > (theoretically at least) safer when nothing is known about the
> > particular service's requirements.
> >
> > I should've flipped the default at 2.0.0 but didn't think of it.
> > Now we'll have to wait a decade for 3.0.0 :) or maybe we can
> > justify doing it in a minor bump in a few years.
>
> We don't have anything like clone-defaults, right?
> Maybe I'm missing where the default behavior of clones is already
> covered. If not, setting the default that way would at least be
> backward-compatible, and one would only have to think of it once.
>
> Klaus
Good idea, and yes, you can set it in rsc_defaults. Non-clone resources
will just ignore it.
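
For example (a minimal sketch; the exact pcs syntax and the generated
ids can vary by version):

  pcs resource defaults interleave=true

which lands in the CIB as something like:

  <rsc_defaults>
    <meta_attributes id="rsc_defaults-options">
      <nvpair id="rsc_defaults-options-interleave"
              name="interleave" value="true"/>
    </meta_attributes>
  </rsc_defaults>

A clone that sets interleave explicitly in its own meta attributes
still overrides the default.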
> >
> > > Now it behaves as expected when node2 is brought online:
> > >
> > > Jan 16 12:35:33 node1 pacemaker-schedulerd[564]: notice: * Start  dlm:1        ( node2 )
> > > Jan 16 12:35:33 node1 pacemaker-schedulerd[564]: notice: * Start  lockd:1      ( node2 )
> > > Jan 16 12:35:33 node1 pacemaker-schedulerd[564]: notice: * Start  gfs2-lvm:1   ( node2 )
> > > Jan 16 12:35:33 node1 pacemaker-schedulerd[564]: notice: * Start  gfs2-fs:1    ( node2 )
> > > Jan 16 12:35:33 node1 pacemaker-schedulerd[564]: notice: * Start  ocfs2-lvm:1  ( node2 )
> > > Jan 16 12:35:33 node1 pacemaker-schedulerd[564]: notice: * Start  ocfs2-fs:1   ( node2 )
> > >
> > > Clone: gfs2-clone
> > >   Meta Attrs: interleave=true target-role=Started
> > >   Group: gfs2
> > >     Resource: gfs2-lvm (class=ocf provider=heartbeat type=LVM-activate)
> > >       Attributes: activation_mode=shared vg_access_mode=lvmlockd
> > >                   vgname=vgshared lvname=gfs2
> > >     Resource: gfs2-fs (class=ocf provider=heartbeat type=Filesystem)
> > >       Attributes: directory=/srv/gfs2 fstype=gfs2
> > >                   device=/dev/vgshared/gfs2