[ClusterLabs] Configuring pacemaker to migrate a group of co-located resources if any of them fail

Ken Gaillot kgaillot at redhat.com
Tue Apr 2 10:20:14 EDT 2019


On Tue, 2019-04-02 at 00:24 +0000, Chris Dewbery wrote:
> Hi,
>  
> I have a two node cluster running pacemaker 2.0, which I would like
> to run in an
> active/standby model, that is, where all of the resources run on the
> active node
> and will be migrated to the standby node in the case where any
> resource on the active node
> fails.  While I understand this model might seem a little unusual, I 

Actually, two-node active/standby is quite common. From pacemaker's
perspective, there is no need to distinguish the two nodes, but it's a
common administrative model.

> have a number
> of resources that must be co-located and this model significantly
> simplifies the amount
> of testing required as there are only 2 states.
>  
> After reading through the pacemaker documentation the closest
> behavior I have been able
> to achieve is by placing each of my resources into a group like so
>  
> <resources>
>     <group id="group1">
>         <primitive id="A" class="lsb" type="A"/>
>         <primitive id="B" class="lsb" type="B"/>
>         <primitive id="C" class="lsb" type="C"/>
>         <primitive id="D" class="lsb" type="D"/>
>    </group>
> </resources>
>  
>  
> In this case if resource A fails on node1, A,B,C & D are all migrated
> to node2.
> However, in the case where resource B fails, it is simply restarted.
>  
> Is there a way to configure pacemaker so that if any of the resources
> fail within
> this group that all resources are migrated?

Set migration-threshold=1 on the group. Each of the members will
inherit it.
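
Something along these lines should do it (an untested sketch based on
your first example; the meta_attributes/nvpair ids are just
placeholders):

    <group id="group1">
        <meta_attributes id="group1-meta_attributes">
            <nvpair id="group1-migration-threshold"
                    name="migration-threshold" value="1"/>
        </meta_attributes>
        <primitive id="A" class="lsb" type="A"/>
        <primitive id="B" class="lsb" type="B"/>
        <primitive id="C" class="lsb" type="C"/>
        <primitive id="D" class="lsb" type="D"/>
    </group>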

This means that if the resources fail over, they will not be able to
fail back to the original node until you clean the failures (whether
manually or by setting a failure-timeout). If you don't want the
resources to automatically move back as soon as the failures are
cleared, set a resource-stickiness as well.
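
If you do want a failure-timeout and/or stickiness, they can go in the
same meta_attributes block, for example (the values here are only
illustrations; tune them to your environment):

    <meta_attributes id="group1-meta_attributes">
        <nvpair id="group1-migration-threshold"
                name="migration-threshold" value="1"/>
        <nvpair id="group1-failure-timeout"
                name="failure-timeout" value="10min"/>
        <nvpair id="group1-resource-stickiness"
                name="resource-stickiness" value="100"/>
    </meta_attributes>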


> To complicate things a little further, I also have a number of
> multistate resources that
> I would like the resource in the Master role to always be co-located
> with each of the resources
> in group1.
>  
> For example.
>  
> <resources>
>     <master id="ms_A">
>       <meta_attributes>
>         <nvpair name="notify" value="true"/>
>         <nvpair name="clone-max" value="2"/>
>         <nvpair name="promoted-max" value="1"/>
>         <nvpair name="promoted-node-max" value="1"/>
>       </meta_attributes>
>       <primitive id="A" class="ocf" provider="ocf-scripts" type="A">
>         <operations>
>            . . . . . .
>         </operations>
>       </primitive>
>     </master>
>  
>     <group id="group1">
>         <primitive id="B" class="lsb" type="A"/>
>         <primitive id="C" class="lsb" type="B"/>
>         <primitive id="D" class="lsb" type="C"/>
>         <primitive id="E" class="lsb" type="D"/>
>    </group>
> </resources>
> <constraints>
>     <rsc_colocation id="coloc-1" score="INFINITY" rsc="group1" with-rsc="ms_A" with-rsc-role="Master"/>
> </constraints>
>  
>  
> It would be great if these could be co-located in a similar manner,
> where a failure of
> any resource results in all co-located resources being
> migrated/promoted etc.

It looks like you've already got that (ms_A Master role with group1
constraint). You currently only have a colocation; you also need an
ordering if the master shouldn't be promoted until the group is active.
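
For example, something like this (untested; the id is just a
placeholder, and kind="Mandatory" is the default anyway):

    <rsc_order id="order-group1-then-ms_A-promote" first="group1"
               first-action="start" then="ms_A" then-action="promote"/>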

> Any suggestions as to how I can accomplish this behavior would be
> greatly appreciated.
>  
>  
> Regards,
>  
>  
> Chris
-- 
Ken Gaillot <kgaillot at redhat.com>


