[ClusterLabs] Informing RAs about recovery: failed resource recovery, or any start-stop cycle?

Adam Spiers aspiers at suse.com
Sat Jun 25 12:27:39 EDT 2016

Ken Gaillot <kgaillot at redhat.com> wrote:
> On 06/24/2016 05:41 AM, Adam Spiers wrote:
> > Andrew Beekhof <abeekhof at redhat.com> wrote:
> >> On Fri, Jun 24, 2016 at 1:01 AM, Adam Spiers <aspiers at suse.com> wrote:
> >>> Andrew Beekhof <abeekhof at redhat.com> wrote:
> >>>>> Earlier in this thread I proposed
> >>>>> the idea of a tiny temporary file in /run which tracks the last known
> >>>>> state and optimizes away the consecutive invocations, but IIRC you
> >>>>> were against that.
> >>>>
> >>>> I'm generally not a fan, but sometimes state files are a necessity.
> >>>> Just make sure you think through what a missing file might mean.
> >>>
> >>> Sure.  A missing file would mean the RA's never called service-disable
> >>> before,
> >>
> >> And that is why I generally don't like state files.
> >> The default location for state files doesn't persist across reboots.
> >>
> >> t1. stop (ie. disable)
> >> t2. reboot
> >> t3. start with no state file
> > 
> > Well then we simply put the state file somewhere which does persist
> > across reboots.
> 
> There's also the possibility of using a node attribute. If you set a
> normal node attribute, it will abort the transition and calculate a new
> one, so that's something to take into account. You could set a private
> node attribute, which never gets written to the CIB and thus doesn't
> abort transitions, but it also does not survive a complete cluster stop.

Interesting idea, although I wonder whether there is a good solution
to either of these challenges.  Aborting the current transition sounds
bad, and we would certainly want the state to survive a cluster stop;
otherwise we risk the exact issue Andrew described above.

Also, since the state is per-node, I'm not convinced there's a huge
advantage to sharing it cluster-wide, which is why I proposed the
local filesystem as the store for it.  But I'm open to suggestions of
course :-)
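
Concretely, the state-file approach I have in mind is something like
the following (the directory and helper names are illustrative only;
the key point is that /var/lib persists across reboots where /run does
not):

```shell
# Hedged sketch of the state-file approach: record whether this RA has
# called service-disable in a file that survives reboots.  The path
# and helper names are illustrative, not taken from any real RA.
STATE_DIR="${STATE_DIR:-/var/lib/myra}"     # persistent, unlike /run
STATE_FILE="$STATE_DIR/service_disabled"

mark_disabled() {
    mkdir -p "$STATE_DIR" && touch "$STATE_FILE"
}

clear_disabled() {
    rm -f "$STATE_FILE"
}

was_disabled() {
    # A missing file means we never called service-disable (or the
    # file was lost) -- the caller must treat that conservatively.
    [ -f "$STATE_FILE" ]
}
```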
