[ClusterLabs] Pacemaker process 10-15% CPU
    Karthikeyan Ramasamy 
    karthikeyan.ramasamy at ericsson.com
       
    Fri Oct 30 10:14:20 UTC 2015
    
    
  
Hello,
  We are using Pacemaker to manage the services that run on a node, as part of a service management framework, and to manage the nodes running those services as a cluster.  One service will run as 1+1 and the other services will be N+1.
  During our testing, we see that the Pacemaker processes take about 10-15% of the CPU.  We would like to know whether this is normal and whether the CPU utilisation can be minimised.
Sample output of the most CPU-intensive processes on the active manager node:
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
189      15766 30.4  0.0  94616 12300 ?        Ss   18:01  48:15 /usr/libexec/pacemaker/cib
189      15770 28.9  0.0 118320 20276 ?        Ss   18:01  45:53 /usr/libexec/pacemaker/pengine
root     15768  2.6  0.0  76196  3420 ?        Ss   18:01   4:12 /usr/libexec/pacemaker/lrmd
root     15767 15.5  0.0  95380  5764 ?        Ss   18:01  24:33 /usr/libexec/pacemaker/stonithd
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
189      15766 30.5  0.0  94616 12300 ?        Ss   18:01  49:58 /usr/libexec/pacemaker/cib
189      15770 29.0  0.0 122484 20724 ?        Rs   18:01  47:29 /usr/libexec/pacemaker/pengine
root     15768  2.6  0.0  76196  3420 ?        Ss   18:01   4:21 /usr/libexec/pacemaker/lrmd
root     15767 15.5  0.0  95380  5764 ?        Ss   18:01  25:25 /usr/libexec/pacemaker/stonithd
We also observed that the processes are not distributed equally across all the available cores, and found that Red Hat has acknowledged that RHEL does not spread processes across the available cores efficiently.  We are trying to use irqbalance to balance the load across the available cores.
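For what it's worth, irqbalance distributes hardware interrupts across cores rather than placing processes; process placement is controlled through CPU affinity, for example with taskset. A minimal sketch (core range 0-3 is an illustrative assumption, adjust to your topology):

```shell
#!/bin/sh
# Hypothetical sketch: allow the busiest Pacemaker daemons to run on
# cores 0-3 by setting their CPU affinity with taskset.
for daemon in cib pengine stonithd; do
    pid=$(pidof "$daemon") || continue
    # -c takes a core list, -p applies it to an existing PID
    taskset -cp 0-3 "$pid"
done
```

This does not reduce total CPU time consumed; it only spreads (or confines) the load across cores.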
Please let us know if there is any way we could minimise the CPU utilisation.  We don't require the STONITH feature, but to our knowledge there is no way to stop that daemon from running.  If that is also possible, please let us know.
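As far as I know, the stonithd process is always started by pacemakerd and cannot be disabled on its own, but the fencing logic can be switched off cluster-wide via the stonith-enabled property, which should leave the daemon largely idle. A sketch, assuming either pcs or crmsh is installed:

```shell
# Disable fencing logic cluster-wide; stonithd still runs under
# pacemakerd but should no longer do any fencing work.
pcs property set stonith-enabled=false

# Equivalent with crmsh:
# crm configure property stonith-enabled=false
```

Note that running without fencing is generally discouraged for clusters managing shared resources, since a failed node can no longer be reliably isolated.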
Thanks,
Karthik.