<html><body><p>Hi all,<br><br>Over the past few days I have noticed that the cib and pcsd (ruby) processes are pegged at ~99% CPU, and commands such as<br>pcs status pcsd take up to 5 minutes to complete. On all active cluster nodes, top shows:<br><br><font face="Arial">PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<br>27225 haclust+ 20 0 116324 91600 23136 R 99.3 0.1 1943:40 cib<br>23277 root 20 0 12.868g 8.176g 8460 S 99.7 13.0 407:44.18 ruby</font><br><br>The system log shows "High CIB load detected" messages over the past several days:<br><br>[root@zs95kj ~]# grep "High CIB load detected" /var/log/messages |grep "Feb 3" |wc -l<br>1655<br>[root@zs95kj ~]# grep "High CIB load detected" /var/log/messages |grep "Feb 2" |wc -l<br>1658<br>[root@zs95kj ~]# grep "High CIB load detected" /var/log/messages |grep "Feb 1" |wc -l<br>147<br>[root@zs95kj ~]# grep "High CIB load detected" /var/log/messages |grep "Jan 31" |wc -l<br>444<br>[root@zs95kj ~]# grep "High CIB load detected" /var/log/messages |grep "Jan 30" |wc -l<br>352<br><br><br>The first entries logged on Feb 2 started around 8:42 AM:<br><br>Feb 2 08:42:12 zs95kj crmd[27233]: notice: High CIB load detected: 0.974333<br><br>This coincides with the time I triggered a node fence (off) action by creating an iface-bridge resource that specified<br>a non-existent vlan slave interface (reported to the group yesterday in a separate email thread). That event also cost the cluster quorum, because 2 of my 5 cluster nodes were already offline.<br><br>My cluster currently has just over 200 VirtualDomain resources to manage, plus one iface-bridge resource and one iface-vlan resource,<br>both of which are now configured properly and operational.<br><br>I would appreciate some guidance on how to proceed with debugging this issue. I have not taken any recovery actions yet.<br>I considered stopping the cluster, recycling pcsd.service on all nodes, restarting the cluster, 
and also rebooting the nodes if<br>necessary, but I didn't want to clear the condition yet in case there's anything I can capture while in this state.<br><br>Thanks,<br><br>Scott Greenlese ... KVM on System Z - Solutions Test, Poughkeepsie, N.Y.<br>INTERNET: swgreenl@us.ibm.com<br>
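As an aside, the repeated per-day grep pipelines above can be collapsed into a single pass over the log. A minimal sketch (the `count_cib_load` function name is mine, and it assumes the default syslog "Mon DD" timestamp prefix used in /var/log/messages):

```shell
# Count "High CIB load detected" messages per day in one pass over the log.
# Note: the final sort is lexical, so days from different months will not
# appear in calendar order.
count_cib_load() {
    # $1: path to the log file (e.g. /var/log/messages)
    grep "High CIB load detected" "$1" |
        awk '{ c[$1" "$2]++ } END { for (d in c) print d, c[d] }' |
        sort
}
```

Usage would be `count_cib_load /var/log/messages`, printing one "Mon DD count" line per day.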
</body></html>