[ClusterLabs] Antw: Re: [ClusterLabs] standby and unstandby commands

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Tue Nov 29 02:24:06 EST 2016

>>> Ken Gaillot <kgaillot at redhat.com> wrote on 29.11.2016 at 00:59 in message
<d568cd7d-32de-4857-cc01-aef3d0162381 at redhat.com>:
> On 11/25/2016 04:33 PM, Omar Jaber wrote:
>> Hi all,
>> I have a cluster containing three nodes with different scores for a
>> location constraint, and I have a group resource running on the node
>> that has the highest location constraint score. When I try to move the
>> resource off that node by running "pcs cluster standby <hostname of the
>> node with the highest location constraint score>", the resource stops
>> on that node but fails on the new node (it keeps cycling between
>> starting and failing).
>> At first I thought the problem was the differing scores, but I changed
>> them and the problem still exists.
>> When I run "pcs status" I see a failed action:
>> resource_monitor_10000 on <hostname of the new node> 'not running' (7):
>> call=268, status=complete, exitreason='none',
>>     last-rc-change='Sat Nov 26 00:27:00 2016', queued=0ms, exec=0ms
> When Pacemaker starts a resource, it also schedules any recurring
> monitor configured for it, immediately after the start returns.
> So, it is essential that the resource agent does not return from "start"
> until the service is able to pass a "monitor" call. It's possible that

... or you could add a delay in the resource configuration
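For example (the resource name "myservice" and the exact values are illustrative, and pcs syntax varies somewhat between versions), a start-delay on the monitor operation postpones the first monitor after a start:

```shell
# Hypothetical example: give a slow-starting service 30 seconds before
# Pacemaker runs the first recurring monitor. "myservice" is a
# placeholder resource name, not from the original thread.
pcs resource update myservice op monitor interval=10s start-delay=30s
```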

> the "start" is returning too soon, and the service still has some
> start-up time before it responds to requests. In that case, you'll need
> to modify the resource agent.
> Other possibilities are that the resource agent is returning success
> from start even though the start failed, or that the service is starting
> successfully but then immediately crashing.
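A minimal sketch of the fix Ken describes, in plain POSIX shell: the start action polls its own monitor check and only returns once that check passes. The names (`my_start`, `my_monitor`, `START_TIMEOUT`) and the marker-file "service" are stand-ins for illustration; a real OCF agent would call its actual monitor action and return proper OCF codes.

```shell
#!/bin/sh
# Sketch: a "start" that does not return until a "monitor" call would
# succeed. All names and the marker-file mechanism are illustrative.

START_TIMEOUT=30                    # seconds to wait for readiness
READY_FILE=/tmp/my_service.ready

my_monitor() {
    # Stand-in for the agent's real monitor action: here "running"
    # simply means the marker file exists.
    [ -f "$READY_FILE" ]
}

my_start() {
    # A real agent would launch the daemon here; this stand-in just
    # becomes "ready" after a short delay, in the background.
    ( sleep 2; touch "$READY_FILE" ) &

    # The key point: keep polling until a monitor call would succeed,
    # so the recurring monitor scheduled after start cannot fail.
    waited=0
    while ! my_monitor; do
        [ "$waited" -ge "$START_TIMEOUT" ] && return 1   # start failed
        sleep 1
        waited=$((waited + 1))
    done
    return 0                                             # monitor passes
}
```

The same loop works for any readiness check (a TCP port answering, a status URL returning 200); the point is that "start" and "monitor" agree on what "running" means before "start" returns.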

Some servers fork and exit immediately, so the launcher appears successful right away, while the child (the "server loop") may die a moment later. Finding the right amount of time to wait for the child to die is a kind of black magic ;-)
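That failure mode can be sketched in plain shell. Everything here (the fake daemon, its 2-second crash, the 4-second grace period) is invented for illustration; the point is that the launcher's exit status says nothing about whether the server loop survives.

```shell
#!/bin/sh
# Why "start returned 0" proves little for a forking server: the
# launcher exits at once, while the background "server loop" may die
# later. All timings and names here are made up for the sketch.

start_forking_server() {
    # Fake daemon: the launcher returns immediately; the "server loop"
    # (a background subshell) crashes after 2 seconds.
    ( sleep 2; exit 1 ) &
    CHILD=$!
}

child_alive_after() {
    # One pragmatic workaround: poll the child through a grace period.
    # There is no universally right length -- the "black magic" above.
    i=0
    while [ "$i" -lt "$1" ]; do
        sleep 1
        kill -0 "$CHILD" 2>/dev/null || return 1   # child is gone
        i=$((i + 1))
    done
    return 0                                       # child still alive
}

start_forking_server
if child_alive_after 4; then
    echo "looks started"
else
    echo "server loop died during the grace period"
fi
```

With these made-up timings the grace-period check catches the crash; with a longer-lived child it would not, which is exactly the "how long do we wait?" problem.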

> _______________________________________________
> Users mailing list: Users at clusterlabs.org 
> http://clusterlabs.org/mailman/listinfo/users 
> Project Home: http://www.clusterlabs.org 
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf 
> Bugs: http://bugs.clusterlabs.org 
