<div dir="ltr">Ken, thank you for the answer.<div><br></div><div>Every node in my cluster under normal conditions has "load average" of about 420. It is mainly connected to the high disk IO on the system.</div><div>My system is designed to use almost 100% of its hardware (CPU/RAM/disks), so the situation when the system consumes almost all HW resources is normal. </div><div>I would like to get rid of <span style="font-size:12.8px">"High CPU load detected" messages in the log,</span> because they flood corosync.log as well as system journal.</div><div><br></div><div>Maybe you can give an advice what would be the best way do to it?</div><div><br></div><div>So far I came up with the idea of setting "l<span style="font-size:12.8px">oad-threshold" to 1000% , because of:</span></div><div> 420(<span style="font-size:12.8px">load average) </span>/ 24 (cores) = 17.5 (<span style="font-size:11pt;font-family:Calibri,sans-serif">adjusted_load</span>); </div><div> 2 (<span style="font-size:11pt;font-family:Calibri,sans-serif">THROTLE_FACTOR_HIGH</span><span style="font-size:12.8px">) * 10 (</span><span style="font-size:11pt;font-family:Calibri,sans-serif">throttle_load_target</span><span style="font-size:12.8px">) = 20</span></div><div><span style="font-size:12.8px"><br></span></div><div> if(adjusted_load > THROTTLE_FACTOR_HIGH * throttle_load_target) {</div><div> crm_notice("High %s detected: %f", desc, load);</div><div><br></div><div><br></div><div>In this case do I need to set "node-action-limit" to something less than "2 x cores" (which is default).</div><div>Because the logic is (crmd/throttle.c):</div><div><br></div><div><div> switch(r->mode) {</div><div> case throttle_extreme:</div><div> case throttle_high:</div><div> jobs = 1; /* At least one job must always be allowed */</div><div> break;</div><div> case throttle_med:</div><div> jobs = QB_MAX(1, r->max / 4);</div><div> break;</div><div> case throttle_low:</div><div> jobs = QB_MAX(1, r->max / 2);</div><div> break;</div><div> case throttle_none:</div><div> jobs = QB_MAX(1, r->max);</div><div> break;</div><div> default:</div><div> crm_err("Unknown throttle mode %.4x on %s", r->mode, node);</div><div> break;</div><div> }</div><div> return jobs;</div></div><div><br></div><div><br></div><div>The thing is, I know that there is "<span style="font-size:12.8px">High CPU load" and this is normal state, but I </span><span style="font-size:12.8px">wont</span><span style="font-size:12.8px"> Pacemaker to not saying it to me and treat this state the best it can.</span></div><div><br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr">Thank you,<div>Kostia</div></div></div></div></div></div>
<br><div class="gmail_quote">On Mon, Mar 14, 2016 at 7:18 PM, Ken Gaillot <span dir="ltr"><<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 02/29/2016 07:00 AM, Kostiantyn Ponomarenko wrote:<br>
> I am back to this question =)
>
> I am still trying to understand the impact of the "High CPU load detected"
> messages in the log.
> Looking in the code I figured out that setting the "load-threshold" parameter
> to something higher than 100% solves the problem.
> And actually for 8 cores (12 with Hyper-Threading) load-threshold=400% kind
> of works.
>
> Also I noticed that this parameter may have an impact on "the maximum
> number of jobs that can be scheduled per node", as there is a formula to
> limit F_CRM_THROTTLE_MAX based on F_CRM_THROTTLE_MODE.
>
> Is my understanding correct that the impact of setting "load-threshold"
> high enough (so there are no noisy messages) is limited to
> "throttle_job_max" and nothing more?
> Also, if I got it correctly, then "throttle_job_max" is the number of
> allowed parallel actions per node in lrmd,
> and a child of the lrmd is actually an RA process running some action
> (monitor, start, etc).
>
> So there is no impact on how many RAs (resources) can run on a node, only on
> how Pacemaker operates them in parallel (I am not sure I understand
> this part correctly).
I believe that is an accurate description. I think the job limit applies
to fence actions as well as lrmd actions.

Note that if /proc/cpuinfo exists, pacemaker will figure out the number
of cores from there, and divide the actual reported load by that number
before comparing against load-threshold.
> Thank you,
> Kostia
>
> On Wed, Jun 3, 2015 at 12:17 AM, Andrew Beekhof <andrew@beekhof.net> wrote:
>
>>
>>> On 27 May 2015, at 10:09 pm, Kostiantyn Ponomarenko <konstantin.ponomarenko@gmail.com> wrote:
>>>
>>> I think I wasn't precise in my questions.
>>> So I will try to ask more precise questions.
>>> 1. Why is the default value for "load-threshold" 80%?
>>
>> Experimentation showed it better to begin throttling before the node
>> became saturated.
>>
>>> 2. What would be the impact on the cluster in case of "load-threshold=100%"?
>>
>> Your nodes will be busier. Will they be able to handle your load or will
>> it result in additional recovery actions (creating more load and more
>> failures)? Only you will know when you try.
>>
>>>
>>> Thank you,
>>> Kostya
>>>
>>> On Mon, May 25, 2015 at 4:11 PM, Kostiantyn Ponomarenko <konstantin.ponomarenko@gmail.com> wrote:
>>> Guys, please, if anyone can help me understand this parameter better,
>>> I would appreciate it.
>>>
>>>
>>> Thank you,
>>> Kostya
>>>
>>> On Fri, May 22, 2015 at 4:15 PM, Kostiantyn Ponomarenko <konstantin.ponomarenko@gmail.com> wrote:
>>> Another question - is it crmd-specific to measure CPU usage by "I/O wait"?
>>> And if I need to get the most performance out of the resources running in the
>>> cluster, should I set "load-threshold=95%" (or even 100%)?
>>> Will it impact the cluster behavior in any way?
>>> The man page for crmd says: "The cluster will slow down its recovery
>>> process when the amount of system resources used (currently CPU)
>>> approaches this limit".
>>> Does it mean there will be delays in the cluster moving resources in case
>>> a node goes down, or something else?
>>> I just want to understand it better.
>>>
>>> Thank you in advance for the help =)
>>>
>>> P.S.: The main resource does a lot of disk I/O.
>>>
>>>
>>> Thank you,
>>> Kostya
>>>
>>> On Fri, May 22, 2015 at 3:30 PM, Kostiantyn Ponomarenko <konstantin.ponomarenko@gmail.com> wrote:
>>> I didn't know that.
>>> You mentioned "as opposed to other Linuxes", but I am using Debian Linux.
>>> Does it also measure CPU usage by I/O waits?
>>> You are right about "I/O waits" (a screenshot of "top" is attached).
>>> But why does it show 50% of CPU usage for a single process (the main
>>> one) while "I/O waits" shows a bigger number?
>>>
>>>
>>> Thank you,
>>> Kostya
>>>
>>> On Fri, May 22, 2015 at 9:40 AM, Ulrich Windl <Ulrich.Windl@rz.uni-regensburg.de> wrote:
>>>>>> "Ulrich Windl" <Ulrich.Windl@rz.uni-regensburg.de> wrote on 22.05.2015 at 08:36 in message <555EEA72020000A10001A71D@gwsmtp1.uni-regensburg.de>:
>>>> Hi!
>>>>
>>>> I Linux I/O waits are considered for load (as opposed to other Linuxes) Thus
>>> ^^ "In"
>> s/Linux/UNIX/
>>>
>>> (I should have my coffee now to awake ;-) Sorry.