[Pacemaker] Convenience Groups - WAS Re: [Linux-HA] Unordered groups (was Re: Is 'resource_set' still experimental?)

Vladislav Bogdanov bubble at hoster-ok.com
Thu Apr 19 17:41:07 EDT 2012

19.04.2012 20:48, David Vossel wrote:
> ----- Original Message -----
>> From: "Alan Robertson" <alanr at unix.sh>
>> To: pacemaker at oss.clusterlabs.org, "Andrew Beekhof" <andrew at beekhof.net>
>> Cc: "Dejan Muhamedagic" <dejan at hello-penguin.com>
>> Sent: Thursday, April 19, 2012 10:22:48 AM
>> Subject: [Pacemaker] Convenience Groups - WAS Re: [Linux-HA] Unordered groups (was Re: Is 'resource_set' still
>> experimental?)
>> Hi Andrew,
>> I'm currently working on a fairly large cluster with lots of
>> resources
>> related to attached hardware.  There are 59 of these things and 24 of
>> those things and so on and each of them has its own resource to deal
>> with the "things".  They are not clones, and can't easily be made
>> clones.
>> I would like to be able to easily say "shut down all the resources
>> that
>> manage this kind of thing".    The solution that occurs to me most
>> obviously is one you would likely call a "double abomination" ;-) -
>> an
>> unordered and un-colocated group.  It seems a safe assumption that
>> this
>> would not be a good path to pursue given your statements from last
>> year...
>> What would you suggest instead?
> This might be a terrible idea, but this is the first thing that came to mind.
> What if you made a Dummy resource as a sort of control switch for starting/stopping each "group" of resources that control a "thing".  The resource groups wouldn't actually be defined as resource groups, but instead would be defined by order constraints that force a set of resources to start or stop when the Dummy control resource starts/stops.
> So, something like this...
> Dummy resource D1
> thing resource T1
> thing resource T2
> - If you start D1 then T1 and T2 can start.
> - If you stop D1, then T1 and T2 have to stop.
> - If you flip D1 back on, then T1 and T2 start again.
> order set start (D1) then start (T1 and T2)
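For reference, the order set sketched above could be expressed in CIB XML roughly as follows (the constraint and set ids are illustrative; `sequential="false"` on the second set leaves T1 and T2 unordered relative to each other):

```
<rsc_order id="order-D1-things" kind="Mandatory">
  <resource_set id="set-D1" sequential="true">
    <resource_ref id="D1"/>
  </resource_set>
  <resource_set id="set-things" sequential="false">
    <resource_ref id="T1"/>
    <resource_ref id="T2"/>
  </resource_set>
</rsc_order>
```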

But when Pacemaker decides to move the Dummy resource to another node, the
whole dependent stack will be restarted, even if Dummy is configured with
allow-migrate.

I solved this problem for myself with an RA that manages a cluster ticket;
other resources depend on that ticket, effectively using it as a
cluster-wide switch.
This solution works for me with post-1.1.7 pacemaker (somewhere near the
current master branch).
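For concreteness, a sketch of the ticket approach (the ticket name "things" and the resource ids are hypothetical): each resource is tied to the ticket with an rsc_ticket constraint, so it may only run while the ticket is granted:

```
<rsc_ticket id="T1-with-things" rsc="T1" ticket="things" loss-policy="stop"/>
<rsc_ticket id="T2-with-things" rsc="T2" ticket="things" loss-policy="stop"/>
```

The ticket itself is then flipped with `crm_ticket --ticket things --grant` to let the resources start, and `crm_ticket --ticket things --revoke` to stop them, without any node-placement dependency on a control resource.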
