[ClusterLabs] Start resource only if another resource is stopped

Klaus Wenninger kwenning at redhat.com
Fri Aug 19 05:03:08 EDT 2022


On Thu, Aug 18, 2022 at 8:26 PM Andrei Borzenkov <arvidjaar at gmail.com> wrote:
>
> On 17.08.2022 16:58, Miro Igov wrote:
> > As you guessed, I am using crm res stop nfs_export_1.
> > I tried the attribute-based solution and it does not work correctly.
> >
>
> It does what you asked for originally, but you are shifting the
> goalposts ...
>
> > When I stop nfs_export_1, it stops data_1 and data_1_active, then it starts
> > data_2_failover - so far so good.
> >
> > When I start nfs_export_1, it starts data_1, starts data_1_active and then
> > stops data_2_failover as a result of the order constraint
> > data_1_active_after_data_1 and the location constraint
> > data_2_failover_if_data_1_inactive.
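
[For readers following along: based on the names above, the constraints in
play presumably look roughly like the crm-shell sketch below. This is an
untested reconstruction - the attribute name and scores are guesses, not
the actual configuration.]

    # rough sketch: data_1_active publishes a node attribute while data_1
    # is up; data_2_failover is banned wherever that attribute is set
    primitive data_1_active ocf:pacemaker:attribute \
        params name=data_1_active active_value=1 inactive_value=0
    order data_1_active_after_data_1 Mandatory: data_1 data_1_active
    location data_2_failover_if_data_1_inactive data_2_failover \
        rule -inf: data_1_active eq 1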
> >
> > But stopping data_2_failover unmounts the filesystem, and the end result is
> > no NFS export mounted at all:
> >
>
> Nowhere before did you mention that you have two resources managing the
> same mount point.
>
> ...
> > Aug 17 15:24:52 intranet-test1 Filesystem(data_1)[16382]: INFO: Running
> > start for nas-sync-test1:/home/pharmya/NAS on
> > /data/synology/pharmya_office/NAS_Sync/NAS
> > Aug 17 15:24:52 intranet-test1 Filesystem(data_1)[16382]: INFO: Filesystem
> > /data/synology/pharmya_office/NAS_Sync/NAS is already mounted.
> ...
> > Aug 17 15:24:52 intranet-test1 Filesystem(data_2_failover)[16456]: INFO:
> > Trying to unmount /data/synology/pharmya_office/NAS_Sync/NAS
> > Aug 17 15:24:52 intranet-test1 systemd[1]:
> > data-synology-pharmya_office-NAS_Sync-NAS.mount: Succeeded.
>
> This configuration is wrong - period. The Filesystem agent's monitor action
> checks for a mounted mountpoint, so pacemaker cannot determine which
> resource is started. You may get away with it because by default
> pacemaker does not run recurring monitors for inactive resources, but any
> probe will give wrong results.
>
> It is almost always wrong to have multiple independent pacemaker
> resources managing the same underlying physical resource.
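
[Illustration of the pattern Andrei is warning about, with the device and
directory taken from the log lines above; the second device and the fstype
are made up for the example:]

    # two independent Filesystem resources pointing at the same directory -
    # a probe of either one sees "mounted" and reports it as started
    primitive data_1 ocf:heartbeat:Filesystem \
        params device="nas-sync-test1:/home/pharmya/NAS" \
          directory="/data/synology/pharmya_office/NAS_Sync/NAS" fstype=nfs
    primitive data_2_failover ocf:heartbeat:Filesystem \
        params device="<failover-server>:/home/pharmya/NAS" \
          directory="/data/synology/pharmya_office/NAS_Sync/NAS" fstype=nfs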
>
> It looks like you are attempting to reimplement a highly available NFS
> server on the client side. If you insist on this, the only solution I see
> is a separate resource agent that monitors the state of the export/data
> resources and sets an attribute accordingly. But effectively you will be
> duplicating pacemaker logic.
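
To make that suggestion a bit more concrete, a rough sketch of what the
monitor side of such an agent could do (the agent, attribute and parameter
names here are all made up):

    # hypothetical fragment of a custom OCF agent: check that the export is
    # reachable and publish the result as a transient node attribute
    monitor_export() {
        if showmount -e "$OCF_RESKEY_server" 2>/dev/null \
                | grep -q "$OCF_RESKEY_export"; then
            attrd_updater -n export_1_up -U 1
            return $OCF_SUCCESS
        fi
        attrd_updater -n export_1_up -U 0
        return $OCF_NOT_RUNNING
    }

    # a location rule could then gate the failover mount on that attribute:
    #   location data_2_failover_if_export_down data_2_failover \
    #       rule -inf: export_1_up eq 1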

As Ulrich already pointed out earlier in this thread, this sounds
a bit as if the concept of promotable resources might be helpful
here, so that at least part of the logic is handled by pacemaker.
But as Andrei says, you'll need a custom resource agent here.
Maybe it could be done in a generic way so that the community
might adopt it in the end. I'm at least not aware of anything
like that being out there already, but ...
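
For illustration only, the shape such a promotable setup could take in crm
shell (the agent and all names below are placeholders, not an existing RA):

    # hypothetical custom agent: the promoted instance serves the primary
    # export, the unpromoted one stands by for the failover export
    primitive nfs_data ocf:custom:nfs_data \
        params directory="/data/synology/pharmya_office/NAS_Sync/NAS" \
        op monitor role=Promoted interval=10s \
        op monitor role=Unpromoted interval=30s
    clone nfs_data_clone nfs_data \
        meta promotable=true promoted-max=1 clone-max=2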

Klaus

> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>


