[ClusterLabs] Antw: Re: Regression in Filesystem RA

Christian Balzer chibi at gol.com
Mon Dec 4 23:21:00 EST 2017


Hello,

On Thu, 30 Nov 2017 09:41:00 +0100 Ulrich Windl wrote:

> > Hello,
> > 
> > sorry for the late reply, moving Data Centers tends to keep one busy.
> > 
> > I looked at the PR and while it works and certainly is an improvement, it
> > wouldn't help me in my case much.
> > Biggest issue being fuser and its exponential slowdown and the RA still
> > uses this.
> > 
> > What I did was to recklessly force my crap code into a script:
> > ---
> > #!/bin/bash
> > lsof -n | grep "$1" | grep DIR | awk '{print $2}'
> > ---  
> 
> Hi!
> 
> I'm not an lsof specialist, but maybe adding more options to lsof would
> let you get rid of the greps and awk. I mean: lsof examines everything and
> you pick what you need, so maybe just let lsof output what you need.
>
In theory a good idea.
But as somebody else put it when I was googling around, "the lsof man page
is a bugger to read".
Read it I did, though, and there is no way to filter/limit the output by
TYPE, and only the DIR entries are of interest here.
Thus piping is a must.

So in the script above one can eliminate the first grep stage by using
"lsof -n $1", but on a machine with 9k processes in that directory tree the
speed improvement is marginal, from roughly 4.4s to 4.2s.
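
Spelled out, that intermediate variant would look roughly like this (untested
sketch; matching the TYPE column in awk instead of a second grep is just one
way of doing it, and the sort -u is optional deduplication):
---
#!/bin/bash
# $1 is the mount point of the filesystem being stopped.
# Naming the mount point restricts lsof to that filesystem, so the
# first grep goes away; the awk match on column 5 (TYPE) replaces
# "grep DIR" and only keeps directory entries. Note that COMMAND
# names containing spaces would throw the column count off.
lsof -n "$1" | awk '$5 == "DIR" { print $2 }' | sort -u
---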
 
What gives a bigger improvement (though still not a dramatic one) is
something like this:
---
lsof -n -t $1
---
This works for the use case here, but if somebody were interested in other
bits than the process IDs, piping the output into grep and friends would
still be needed.

That is about a 25% speed improvement, so I'll take it, of course.
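
For the RA's purposes the PID list is all that is needed, so the kill path
boils down to something like this (untested sketch; the TERM-then-KILL
escalation is only a placeholder, not necessarily what the RA actually does):
---
#!/bin/bash
# $1 is the mount point; lsof -t prints nothing but PIDs, one per
# line, which is exactly the format kill wants.
pids=$(lsof -n -t "$1")
if [ -n "$pids" ]; then
	kill -TERM $pids
	sleep 2
	# anything that survived the grace period gets the hammer
	pids=$(lsof -n -t "$1")
	[ -n "$pids" ] && kill -KILL $pids
fi
---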

However, the main point in favour of lsof is that it slows down only
linearly with an increasing number of processes (unlike fuser's exponential
behaviour), which is crucial here.
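
If anybody wants to check that scaling claim on their own setup, timing both
tools against the same mount point is enough to see the difference once a
few thousand processes live in the tree (the path is just an example):
---
# both commands list the processes using the mounted filesystem;
# repeat with a growing process count and compare the wall times
time fuser -m /mnt/data >/dev/null 2>&1
time lsof -n -t /mnt/data >/dev/null
---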

Regards,

Christian
> > 
> > And call that instead of fuser, as well as removing all kill logging by
> > default (determining the number of pids isn't free either). 
> > 
> > With that in place it can kill 10k processes in less than 10 seconds.
> > 
> > Regards,
> > 
> > Christian
> > 
> > On Tue, 24 Oct 2017 09:07:50 +0200 Dejan Muhamedagic wrote:
> >   
> >> On Tue, Oct 24, 2017 at 08:59:17AM +0200, Dejan Muhamedagic wrote:  
> >> > [...]
> >> > I just made a pull request:
> >> > 
> >> > https://github.com/ClusterLabs/resource-agents/pull/1042    
> >> 
> >> NB: It is completely untested!
> >>   
> >> > It would be great if you could test it!
> >> > 
> >> > Cheers,
> >> > 
> >> > Dejan
> >> >     
> >> > > Regards,
> >> > > 
> >> > > Christian
> >> > >     
> >> > > > > Maybe we can even come up with a way
> >> > > > > to both "pretty print" and kill fast?      
> >> > > > 
> >> > > > My best guess right now is no ;-) But we could log nicely for the
> >> > > > usual case of a small number of stray processes ... maybe
> >> > > > something like this:
> >> > > > 
> >> > > > 	i=""
> >> > > > 	get_pids | tr '\n' ' ' | fold -s |
> >> > > > 	while read procs; do
> >> > > > 		if [ -z "$i" ]; then
> >> > > > 			killnlog $procs
> >> > > > 			i="nolog"
> >> > > > 		else
> >> > > > 			justkill $procs
> >> > > > 		fi
> >> > > > 	done
> >> > > > 
> >> > > > Cheers,
> >> > > > 
> >> > > > Dejan
> >> > > >     
> >> > > > > -- 
> >> > > > > : Lars Ellenberg
> >> > > > > : LINBIT | Keeping the Digital World Running
> >> > > > > : DRBD -- Heartbeat -- Corosync -- Pacemaker
> >> > > > > : R&D, Integration, Ops, Consulting, Support
> >> > > > > 
> >> > > > > DRBD® and LINBIT® are registered trademarks of LINBIT
> >> > > > > 
> >> > > > 
> >> > > >     
> >> > > 
> >> > > 
> >> > 
> >> 
> > 
> > 
> > 
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Rakuten Communications



