[Orca-dev] Re: Is unzipping the percol files during every run necessary

Blair Zajac blair at orcaware.com
Fri Feb 14 18:05:51 PST 2003


kim.taylor at pncbank.com wrote:
> 
> On Friday, February 14, 2003, Chris Jones wrote:
> 
> >I have a problem with the time it's currently taking to process
> >collected data...
> 
> >Surely we could speed things up by moving the old percol files into an
> >archive directory after each invocation (-o -v).
> >Is this possible?  Do we really need to unzip everything each time, or
> >should the RRD files contain the relevant historical data and thus only
> >need updating with newly collected information?
> 
> I've been thinking hard about the same problem, having about 3 years of
> data from upwards of 40 systems.
> 
> It seems that, at the end of the day, all of a FILE has already been
> loaded into the RRDs except for the lag time in the loop (~20 min in my
> case).  The only value in processing any FILE.gz, then, is to pick up
> the last few entries missed when the original FILE was closed and
> compressed.
> 
> What if the compression were simply delayed long enough for the
> original FILE to be processed?
> Something like setting COMPRESSOR="slowgzip.sh" in the
> /etc/init.d/orcallator startup script might do it, where:
> 
> -- slowgzip.sh --
> #!/bin/sh
> # Compress in the background after giving Orca an hour to read FILE.
> (sleep 3600; gzip -9 "$1") &
> 
> Then remove all compression extensions (*.gz and what not) from
> orcallator.cfg and trust your data collector to have FILE available and
> processed before it "disappears".
> 

That's a good idea.  Orca may take more than an hour to read the data
when there are a large number of source files, so the 3600-second delay
may need to be increased.
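
For what it's worth, here's a variant with the delay made adjustable
(SLOWGZIP_DELAY is just a name I'm making up for this sketch; any
variable exported by the startup script before it invokes the
compressor would do):

-- slowgzip.sh (adjustable delay) --
#!/bin/sh
# Compress in the background after a configurable delay, defaulting
# to one hour, so Orca can finish reading the uncompressed file.
(sleep "${SLOWGZIP_DELAY:-3600}"; gzip -9 "$1") &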

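Also, on the orcallator.cfg side: in the stock configuration the
find_files patterns match the compressed names through an optional
trailing group, so dropping that group should be all that's needed.
A sketch only; your actual pattern may differ:

-- orcallator.cfg (excerpt) --
# Before: matches plain and compressed percol files.
# find_files /orca/orcallator/(.*)/percol-\d{4}-\d{2}-\d{2}(?:\.(?:Z|gz))?
# After: matches only the uncompressed files.
find_files /orca/orcallator/(.*)/percol-\d{4}-\d{2}-\d{2}
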
Let me know how it works.

Best,
Blair

-- 
Blair Zajac <blair at orcaware.com>
Plots of your system's performance - http://www.orcaware.com/orca/

