[ClusterLabs] Antw: crm_report consumes all available RAM

Lars Ellenberg lars.ellenberg at linbit.com
Wed Oct 7 11:44:51 EDT 2015


On Wed, Oct 07, 2015 at 05:39:01PM +0200, Lars Ellenberg wrote:
> Something like the below, maybe.
> Untested direct-to-email PoC code.
> 
> if echo . | grep -q -I . 2>/dev/null; then
> 	have_grep_dash_I=true
> else
> 	have_grep_dash_I=false
> fi
> 	# similar checks can be made for other decompressors (sketched after the function below)
> 
> mygrep()
> {
> 	(
> 	# sub shell for ulimit
> 
> 	# ulimit -v ... but maybe someone wants to mmap a huge file,
> 	# and limiting the virtual size cripples mmap unnecessarily,
> 	# so let's limit resident size instead.  Let's be generous: when
> 	# decompressing stuff that was compressed with xz -9, we may
> 	# need ~65 MB according to my man page, and if it was generated
> 	# by something else, the decompressor may need even more.
> 	# Grep itself should not use much more than single-digit MB,
> 	# so if the pipeline below needs more than 200 MB resident,
> 	# we are probably not interested in that file in any case.
> 	#
> 	ulimit -m 200000

Bah, scratch that.
RLIMIT_RSS no longer has any effect as of Linux 2.6,
so we are back to
	ulimit -v 200000
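
A quick way to convince yourself that the limit only hits the subshell,
not the shell running crm_report: something like the following (equally
untested; the file name and the 10000 kB value are just made up for the
demonstration) should make the decompressor fail while the parent shell
stays unrestricted.

	( ulimit -v 10000; xz -dc some-huge-log.xz >/dev/null ) \
		|| echo "decompressor hit the limit, as intended"
	ulimit -v	# still prints "unlimited" in the parent shell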
> 
> 	# Actually no need for "local" anymore,
> 	# this is a subshell already. Just a habit.
> 
> 	local pattern=$1 file=$2
> 	case $file in
> 	*.bz2) bzgrep "$pattern" "$file";; # or bzip2 -dc | grep, if you prefer
> 	*.gz)  zgrep "$pattern" "$file";;
> 	*.xz)  xzgrep "$pattern" "$file";;
> 	# ...
> 	*)
> 		local file_type=$(file "$file")
> 		case $file_type in
> 		*text*)
> 			grep "$pattern" "$file" ;;
> 		*)
> 			# try anyway, let grep use its own heuristic
> 			$have_grep_dash_I && grep --binary-files=without-match "$pattern" "$file" ;;
> 		esac ;;
> 	esac
> 	)
> }
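
Picking up the "similar checks can be made for other decompressors"
comment above, a rough sketch of such a check plus a possible call
site. Untested as well; $report_dir and $pattern are made-up names for
the example, not anything crm_report actually defines.

	# only trust the *.xz branch if xzgrep is actually installed,
	# same spirit as the grep -I probe above
	if command -v xzgrep >/dev/null 2>&1; then
		have_xzgrep=true
	else
		have_xzgrep=false
	fi

	# hypothetical call site: pattern first, then one file at a time
	for f in "$report_dir"/*; do
		mygrep "$pattern" "$f"
	done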

-- 
: Lars Ellenberg
: http://www.LINBIT.com | Your Way to High Availability
: DRBD, Linux-HA  and  Pacemaker support and consulting

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



