[ClusterLabs] Unexpected resource restart
Andrew Price
anprice at redhat.com
Wed Jan 16 07:16:04 EST 2019
On 16/01/2019 11:28, Valentin Vidic wrote:
> Hi all,
>
> I'm testing the following configuration with two nodes:
>
> Clone: storage-clone
>   Meta Attrs: interleave=true target-role=Started
>   Group: storage
>     Resource: dlm (class=ocf provider=pacemaker type=controld)
>     Resource: lockd (class=ocf provider=heartbeat type=lvmlockd)
>
> Clone: gfs2-clone
>   Group: gfs2
>     Resource: gfs2-lvm (class=ocf provider=heartbeat type=LVM-activate)
>       Attributes: activation_mode=shared vg_access_mode=lvmlockd vgname=vgshared lvname=gfs2
>     Resource: gfs2-fs (class=ocf provider=heartbeat type=Filesystem)
>       Attributes: directory=/srv/gfs2 fstype=gfs2 device=/dev/vgshared/gfs2
>
> Clone: ocfs2-clone
>   Group: ocfs2
>     Resource: ocfs2-lvm (class=ocf provider=heartbeat type=LVM-activate)
>       Attributes: activation_mode=shared vg_access_mode=lvmlockd vgname=vgshared lvname=ocfs2
>     Resource: ocfs2-fs (class=ocf provider=heartbeat type=Filesystem)
>       Attributes: directory=/srv/ocfs2 fstype=ocfs2 device=/dev/vgshared/ocfs2
>
> Ordering Constraints:
>   storage-clone then gfs2-clone (kind:Mandatory) (id:gfs2_after_storage)
>   storage-clone then ocfs2-clone (kind:Mandatory) (id:ocfs2_after_storage)
> Colocation Constraints:
>   gfs2-clone with storage-clone (score:INFINITY) (id:gfs2_with_storage)
>   ocfs2-clone with storage-clone (score:INFINITY) (id:ocfs2_with_storage)
>
> When node2 is set to standby, the resources stop running there. However,
> when node2 is brought back online, the resources on node1 stop and then
> start again, which is a bit unexpected?
>
> Maybe the dependency between the shared storage group and the dependent
> gfs2/ocfs2 groups could be written in some other way to prevent this
> resource restart?
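One way such clone-to-clone dependencies are often made per-node (a sketch
in pcs syntax, not necessarily the fix here) is to set interleave=true on
the dependent clones as well, so that each gfs2/ocfs2 instance waits only
for the storage instance on its own node rather than for every instance in
the cluster:

    pcs resource meta gfs2-clone interleave=true
    pcs resource meta ocfs2-clone interleave=true

Without interleave, a mandatory ordering between clones makes every
dependent instance wait on all instances of storage-clone, so a new
instance starting on node2 can trigger a restart on node1.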
>
The only thing that stands out to me with this config is the lack of an
ordering constraint between dlm and lvmlockd. I'm not sure if that's the
issue, though.
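If an explicit ordering constraint were wanted there, a minimal sketch in
pcs syntax would be (dlm and lockd being the resource IDs from the config
above; note that as members of the storage group they already inherit the
group's implicit start order):

    pcs constraint order start dlm then start lockd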
Andy