[ClusterLabs] Antw: crm_report consumes all available RAM

Dejan Muhamedagic dejanmm at fastmail.fm
Thu Oct 8 08:42:45 UTC 2015


On Wed, Oct 07, 2015 at 05:20:09PM +0200, Lars Ellenberg wrote:
> On Tue, Oct 06, 2015 at 11:50:00PM +0200, Jan Pokorný wrote:
> > On 06/10/15 10:28 +0200, Dejan Muhamedagic wrote:
> > > On Mon, Oct 05, 2015 at 07:00:18PM +0300, Vladislav Bogdanov wrote:
> > >> 14.09.2015 02:31, Andrew Beekhof wrote:
> > >>> 
> > >>>> On 8 Sep 2015, at 10:18 pm, Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de> wrote:
> > >>>> 
> > >>>>>>> Vladislav Bogdanov <bubble at hoster-ok.com> schrieb am 08.09.2015 um 14:05 in
> > >>>> Nachricht <55EECEFB.8050001 at hoster-ok.com>:
> > >>>>> Hi,
> > >>>>> 
> > >>>>> just discovered very interesting issue.
> > >>>>> If there is a system user with very big UID (80000002 in my case),
> > >>>>> then crm_report (actually 'grep' it runs) consumes too much RAM.
> > >>>>> 
> > >>>>> Relevant part of the process tree at that moment looks like (word-wrap off):
> > >>>>> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> > >>>>> ...
> > >>>>> root     25526  0.0  0.0 106364   636 ?        S    12:37   0:00          \_ /bin/sh /usr/sbin/crm_report --dest=/var/log/crm_report -f 0000-01-01 00:00:00
> > >>>>> root     25585  0.0  0.0 106364   636 ?        S    12:37   0:00              \_ bash /var/log/crm_report/collector
> > >>>>> root     25613  0.0  0.0 106364   152 ?        S    12:37   0:00                  \_ bash /var/log/crm_report/collector
> > >>>>> root     25614  0.0  0.0 106364   692 ?        S    12:37   0:00                      \_ bash /var/log/crm_report/collector
> > >>>>> root     27965  4.9  0.0 100936   452 ?        S    12:38   0:01                      |   \_ cat /var/log/lastlog
> > >>>>> root     27966 23.0 82.9 3248996 1594688 ?     D    12:38   0:08                      |   \_ grep -l -e Starting Pacemaker
> 
> 
> Whoa.
> grep using up 1.5 gig resident (3.2 gig virtual) still looking for
> the first newline.
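
What makes lastlog behave this way: it is a sparse file indexed by UID with
fixed-size records, so one huge UID inflates its apparent size enormously, and
reading it expands the holes into a giant run of NUL bytes with no newline for
grep to stop at. A minimal demonstration (292 bytes per record is the x86_64
struct lastlog size; the exact figure is an assumption here):

```shell
tmp=$(mktemp)
# Write one byte at the record offset for UID 80000002, as lastlog would.
# 292 bytes/record is assumed (x86_64 struct lastlog).
dd if=/dev/zero of="$tmp" bs=1 count=1 seek=$((80000002 * 292)) 2>/dev/null
apparent=$(stat -c %s "$tmp")     # apparent size: roughly 23 GB
ondisk=$(du -k "$tmp" | cut -f1)  # actual disk usage: a few KiB
echo "apparent=$apparent bytes, on disk=$ondisk KiB"
rm -f "$tmp"
```

cat(1) dutifully expands those holes, and grep buffers the whole
newline-free stream, which matches the RSS seen above.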

Amazing.

> I suggest in addition to the (good) suggestions so far,
> to also set a ulimit.

But if we can rely on file(1) to filter out the non-text files,
we should be OK?

Cheers,

Dejan

> 1) export LC_ALL=C
> so grep won't take quadratic time trying to make sure it understands
> unicode correctly; yes, I'm sure that bug has been fixed on most systems meanwhile...
> 
> 2)
>  ( ulimit -v 100000 ; grep ) 
> 
> Usually, even with "very many very long lines",
> my grep stays below a few (~3) megabyte.
> A limit of 100M seems to be way too much,
> but if it thinks it needs that much RAM to find a short string,
> then we are very likely not interested in that file.
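
Put together, the two suggestions look roughly like this (the file and
pattern are illustrative; the 100000 KiB cap is the value from the mail):

```shell
tmp=$(mktemp)
printf 'Sep 08 14:05:00 node1 pacemakerd: Starting Pacemaker\n' > "$tmp"
out=$(
    ulimit -v 100000    # cap the subshell's virtual memory at ~100 MB (KiB units)
    LC_ALL=C grep -l -e "Starting Pacemaker" "$tmp"
)                       # a grep that would exceed the cap fails instead of thrashing
echo "grep matched: $out"
rm -f "$tmp"
```

The subshell keeps the ulimit from leaking into the rest of the collector,
so only the one grep is constrained.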
> 
> 
> -- 
> : Lars Ellenberg
> : http://www.LINBIT.com | Your Way to High Availability
> : DRBD, Linux-HA  and  Pacemaker support and consulting
> 
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
> 
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org




More information about the Users mailing list