[ClusterLabs] Instant service restart during failback

Ken Gaillot kgaillot at redhat.com
Mon May 8 17:30:22 EDT 2017


If you look in the logs when the node comes back, there should be some
"pengine:" messages noting that the restarts will be done, and then a
"saving inputs in <filename>" message. If you can attach that file (both
with and without the constraint changes would be ideal), I'll take a
look at it.
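
A saved policy-engine input can also be replayed locally to see why the restarts are scheduled; a sketch, assuming a Pacemaker 1.1 install and an illustrative file name (use the one from the "saving inputs" message):

```
# Replay a saved pengine input and list the actions the cluster would take
crm_simulate -S -x /var/lib/pacemaker/pengine/pe-input-123.bz2

# Show the allocation scores behind each placement decision
crm_simulate -s -x /var/lib/pacemaker/pengine/pe-input-123.bz2
```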

On 04/21/2017 05:26 AM, Euronas Support wrote:
> Seems that replacing inf: with 0: in some colocation constraints fixes the
> problem, but I still cannot understand why it worked for one node and not
> for the other.
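
For reference, the two forms in crm shell syntax might look like this (a sketch; the resource and constraint names are hypothetical):

```
# Mandatory colocation: score inf (INFINITY) ties clone-B's placement to
# clone-A's, so any placement change for clone-A when a node returns can
# force a restart of clone-B.
colocation col-B-with-A inf: clone-B clone-A

# Score 0: the constraint expresses no placement preference, so clone-B is
# left alone when clone-A's placement changes.
colocation col-B-with-A 0: clone-B clone-A
```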
> 
> On 20.4.2017 12:16:02 Klechomir wrote:
>> Hi Klaus,
>> It would have been too easy if it were interleave.
>> All my cloned resources have interleave=true, of course.
>> What bothers me more is that the behaviour is asymmetrical.
>>
>> Regards,
>> Klecho
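
For context, interleave is a clone meta attribute; a CIB fragment setting it might look like this (a sketch; the resource and id names are hypothetical):

```
<clone id="clone-A">
  <meta_attributes id="clone-A-meta">
    <!-- interleave=true: each copy of a dependent clone only depends on the
         local copy of this clone, not on every copy cluster-wide -->
    <nvpair id="clone-A-interleave" name="interleave" value="true"/>
  </meta_attributes>
  <primitive id="rsc-A" class="ocf" provider="heartbeat" type="Dummy"/>
</clone>
```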
>>
>> On 20.4.2017 10:43:29 Klaus Wenninger wrote:
>>> On 04/20/2017 10:30 AM, Klechomir wrote:
>>>> Hi List,
>>>> Been investigating the following problem recently:
>>>>
>>>> I have a two-node cluster with 4 cloned (2 on top of 2) + 1 master/slave
>>>> services on it (corosync+pacemaker 1.1.15).
>>>> Failover works properly for both nodes, i.e. when one node is
>>>> restarted/put in standby, the other properly takes over, but:
>>>>
>>>> Every time node2 has been in standby/turned off and comes back,
>>>> everything recovers properly.
>>>> Every time node1 has been in standby/turned off and comes back, part
>>>> of the cloned services on node2 get instantly restarted, in the same
>>>> second that node1 reappears, without any apparent reason (only the
>>>> stop/start messages in the debug log).
>>>>
>>>> Is there some known possible reason for this?
>>>
>>> That triggers some deja-vu feeling...
>>> Did you have a similar issue a couple of weeks ago?
>>> I remember in that particular case 'interleave=true' was not the
>>> solution to the problem but maybe here ...
>>>
>>> Regards,
>>> Klaus
>>>
>>>> Best regards,
>>>> Klecho
