[Pacemaker] Pacemaker unnecessarily (?) restarts a vm on active node when other node brought out of standby - possible solution?

Ian cl-3627 at jusme.com
Tue May 20 15:29:16 UTC 2014


Andrew Beekhof wrote:
> On 19 May 2014, at 4:17 pm, Andrew Beekhof <andrew at beekhof.net> wrote:
> 
>> 
>> On 16 May 2014, at 3:41 am, Ian <cl-3627 at jusme.com> wrote:
>> 
>>> Doing some experiments and reading TFM, I found this:
>>> 
>>> 5.2.2. Advisory Ordering
>>> When the kind=Optional option is specified for an order constraint, 
>>> the constraint is considered optional and only has an effect when 
>>> both resources are stopping and/or starting. Any change in state of 
>>> the first resource you specified has no effect on the second resource 
>>> you specified.
>>> 
>>> (From 
>>> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Configuring_the_Red_Hat_High_Availability_Add-On_with_Pacemaker/index.html)
>>> 
>>> This seems to tickle the right area. Adding "kind=Optional" to the 
>>> gfs2 -> drbd order constraint makes it all work as desired (start-up 
>>> and shut-down are correctly ordered,
>> 
>> Not really; it allows gfs2 to start even if drbd can't run anywhere.
>> 
>>> and bringing the other node out of standby doesn't force a gratuitous 
>>> restart of the gfs2 filesystem and the vms that rely on it on the 
>>> already active node).
>>> 
>>> Is that the correct solution, I wonder?
>> 
>> Unlikely
> 
> I've filed a bug for this so it doesn't get lost:
> 
>    http://bugs.clusterlabs.org/show_bug.cgi?id=5214
> 
> It may not make the cut for 1.1.12, though, since a dual-master
> setup isn't a common use case.

Cheers, I was hoping I'd just misconfigured things. I'm surprised that 
drbd+gfs2 under Pacemaker isn't more often used as a low-rent SAN 
substitute.
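
For the record, the change boils down to a single order constraint. A 
rough sketch in pcs syntax (the resource names "drbd-ms" and 
"gfs2-clone" are placeholders, not necessarily what any given cluster 
uses):

    # Before: a mandatory ordering (the default), where any restart or
    # re-promotion of the drbd master also restarts the gfs2 filesystem
    # stacked on top of it:
    #   pcs constraint order promote drbd-ms then start gfs2-clone
    #
    # After: the same ordering made advisory, so it is only enforced
    # when both resources are starting/stopping in the same transition:
    pcs constraint order promote drbd-ms then start gfs2-clone kind=Optional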

Thanks again for your time.
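
P.S. For anyone wanting to reproduce this against the bug above, the 
trigger is simply cycling the second node through standby ("node2" is 
a placeholder hostname):

    # With the mandatory ordering in place, bringing node2 back from
    # standby restarts gfs2 (and the vms on it) on the node that was
    # active the whole time.
    pcs cluster standby node2
    pcs cluster unstandby node2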
