[Pacemaker] fence timeouts

Satomi TANIGUCHI taniguchis at intellilink.co.jp
Mon Jul 27 04:57:38 EDT 2009


Hi Bernd,

With recent Pacemaker,
you can set "stonith-timeout" in each stonith plugin's <instance_attributes/>
to give that plugin its own timeout value.

### example(1) ###
<primitive id="clnStonith-kdumpcheck" class="stonith" type="external/kdumpcheck">
  <instance_attributes id="instance_attributes.kdumpcheck">
    <nvpair id="nvpair.kdumpcheck-hostlist" name="hostlist" value="rh52dev2 rh52dev1"/>
    <nvpair id="nvpair.kdumpcheck-priority" name="priority" value="1"/>
    <nvpair id="nvpair.kdumpcheck-stonith-timeout" name="stonith-timeout" value="60s"/>
  </instance_attributes>
  <operations>
    ....snip...
  </operations>
  <meta_attributes id="primitive-clnStonith-kdumpcheck.meta"/>
</primitive>
##################
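
If you configure with the crm shell instead of raw XML, the same
attributes go into "params" (a sketch of the example(1) equivalent,
assuming the Pacemaker 1.0 crm shell syntax; untested):

### example(1) in crm shell ###
primitive clnStonith-kdumpcheck stonith:external/kdumpcheck \
        params hostlist="rh52dev2 rh52dev1" priority=1 stonith-timeout=60s
##################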

And you can set it in <cluster_property_set> too,
as Andrew says. ;)

### example(2) ###
<cluster_property_set id="cib-bootstrap-options">
  <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.1-6fd0eebd186e stable-1.0 tip"/>
  <nvpair id="nvpair.id2388338" name="stonith-enabled" value="true"/>
  <nvpair id="nvpair.id2388339" name="stonith-timeout" value="120s"/>
</cluster_property_set>
##################
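
With the crm shell, the equivalent is a single "property" command
(again just a sketch; note that dc-version is maintained by the
cluster itself, so you don't set it by hand):

### example(2) in crm shell ###
crm configure property stonith-enabled=true stonith-timeout=120s
##################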

"stonith-timeout" in <cluster_property_set> has 2 meanings.
1 -> the default timeout value for each stonith plugin.
2 -> the timeout value for remote-fencing.

I would not recommend relying on "stonith-timeout" in <cluster_property_set>
as the default stonith-timeout.
It is better to set stonith-timeout explicitly on each stonith plugin,
and to set the value in <cluster_property_set> longer than the total of all
the stonith plugins' timeouts
(for example, with two plugins at 60s each, longer than 120s).

If all stonith plugins on the DC node fail,
or the target of the fencing is the DC node itself,
the DC broadcasts to all other nodes "Hey guys, please fence the target node!",
and waits for a while.
This is the behavior I describe with the word "remote-fencing".
And this "for a while" is the "stonith-timeout" in <cluster_property_set>.
So, if you don't set stonith-timeout per stonith plugin and
remote-fencing occurs, an unexpected timeout can occur...
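
For instance, applied to your st-ipmi-1 resource quoted below, it could
look like this (only a sketch, untested; the 300s per-plugin value and
the 650s cluster-wide value are assumptions chosen to illustrate the
sizing rule for two plugins, 650s > 2 x 300s; pick values that match
your hardware):

### sketch for your configuration ###
primitive st-ipmi-1 stonith:external/ipmi       \
        params hostname=mds1 ipaddr=192.168.0.11        \
        userid=root passwd=calvin interface=lanplus     \
        min_off_time=60 off_time=60 on_time=120         \
        stonith-timeout=300s                            \
        op start timeout=240                            \
        op stop  timeout=240                            \
        op monitor interval=600 timeout=240
property stonith-timeout=650s
##################

(Set stonith-timeout=300s in st-ipmi-2 the same way.)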


Regards,
Satomi TANIGUCHI

Bernd Schubert wrote:
> Hello,
> 
> I am trying to increase the fence timeouts, but as much as I try, I can't figure 
> out how that works.
> 
> I see this thread 
> http://osdir.com/ml/linux.highavailability.devel/2008-09/msg00128.html
> but it also lacks the outcome of how to set the timeouts. 
> 
> ##### Stonith #####
> # very long on_time due to shared onboard NIC / IPMI
> primitive st-ipmi-1 stonith:external/ipmi       \
> params hostname=mds1 ipaddr=192.168.0.11        \
> userid=root passwd=calvin interface=lanplus     \
> min_off_time=60 off_time=60 on_time=120         \
> op start timeout=240                            \
> op stop  timeout=240                            \
> op monitor interval=600 timeout=240
> 
> primitive st-ipmi-2 stonith:external/ipmi       \
> params hostname=mds2 ipaddr=192.168.0.12        \
> userid=root passwd=calvin interface=lanplus     \
> min_off_time=60 off_time=60 on_time=120         \
> op start timeout=240                            \
> op stop  timeout=240                            \
> op monitor interval=600 timeout=240
> 
> location l-st-mds1 st-ipmi-1 -inf: mds1
> location l-st-mds2 st-ipmi-2 -inf: mds2
> ##### Stonith end #####
> 
> (This is with a rewritten external/ipmi; once I have tested it I'm going to 
> send the patch. It has the advantage of verifying resets, but it also needs a 
> long reset time ...).
> 
> According to the thread above, I think it was written somewhere that the start 
> timeout is used. But that value is gracefully ignored and the timeout is 
> still 60s. So I tried
> crm_attribute --type op_defaults --attr-name timeout --attr-value 300s
> but this is not used as the default stonith timeout either.
> 
> I really would be glad if someone could tell me which value sets the default 
> stonith timeout and how to set timeouts per stonith resource.
> 
> 
> Thanks in advance,
> Bernd
> 
> 