[Orca-users] Old data / parsed data

Francisco Mauro Puente fpuente at cencosud.com.ar
Thu Sep 6 10:36:03 PDT 2007


Thanks Michael,

Here is the situation:

I used to run orca on the Linux box (Pentium 4 2.8 GHz / 512 MB RAM), and
whenever orca started to run, the disk activity brought the system to its
knees...

Since I had all the servers scp'ing their files over to the Linux machine,
I couldn't change that just like that, so I decided to leave the files
where they are, but share the rrd and html directories with a Sun v490 so
Orca can process them remotely and store the output on the NFS-mounted
directory.

While orca ran on the Linux box, the disk and CPU activity caused VERY
high I/O (there are some other things running on that box). I'm
processing data for 30 servers, and orca dies after some time here.

Now that the files are being processed on the v490, I've managed to move
the CPU load to the Sun box, but the disks are still being accessed the
same way, and the Linux box becomes useless... nothing else can be done
on it once orca starts to run.

I'm in the process of getting a new server, with 2 or 4 CPUs, in order to
run orca.

I'm using RICHPse-3.4.1, and will update orca to r529 or later as you
suggested.

I've attached a listing of one of my servers' raw data directory, so you
can see the size of the files.

I know a simple 'find' will remove them, but once the data is already
generated, couldn't I just remove them all? Same thing on the client
side, right? I should keep only the files generated in the html
directory, right?

I hope this information helps a bit more.

Regards,
Francisco

>>> David Michaels <dragon at raytheon.com> 06/09/2007 01:01:53 p.m. >>>
NFS takes too much CPU?  That's not right -- are you sure it's NFS
that's consuming CPU, and not the orca computations?  Also, why are your
clients scp'ing the data to the Linux box if the data is being stored
via NFS?  Why not just scp them to the NFS server, or better yet, just
write them to that location in the first place?
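[Editor's note: David's "write them to that location in the first place"
could be sketched roughly as below. Every hostname, path, and the idea of
a per-host output variable are assumptions for illustration -- the real
knob depends on your start_orcallator script. The sketch uses a /tmp
stand-in so it is runnable as-is.]

```shell
#!/bin/sh
# Sketch: mount the v490's exported data directory on each client and
# have orcallator write there directly, so nothing needs scp'ing later.
#
# One-time mount (Solaris syntax; the equivalent would go in /etc/vfstab):
#   mount -F nfs v490:/export/orca /var/orca

ORCA_BASE=/tmp/orca-demo            # stand-in for the NFS mount point
OUTDIR="$ORCA_BASE/$(hostname)"     # one subdirectory per client host
mkdir -p "$OUTDIR"
export OUTDIR                       # variable name is an assumption

echo "orcallator output would go to $OUTDIR"
```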

It could be that the I/O overhead of a constant stream of scp'ed data
coming in is taxing your system beyond what the orca computational
overhead is generating.  So separating the two is advisable.  In my
case, the orcallator process on each box writes the data to the data
directory that orca itself uses.  Orca in turn runs on a v440 (it used
to run on a v210, but as our network grew, that turned out to be
insufficiently powerful).

The *.bz2 files are the raw data.  My Suns and AIX boxes generate about
17MB of raw bzip2'ed data per year.  If your raw data dirs are much
bigger than this, and you need them to be smaller, perhaps you should
adjust what data you're collecting.
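[Editor's note: a quick way to compare your per-host raw data volume
against David's ~17MB/year figure. The one-subdirectory-per-server
layout is an assumption; a /tmp stand-in with a dummy file is used here
so the commands run anywhere.]

```shell
#!/bin/sh
# Rough check of how much raw (bzip2'ed) data each host has piled up.
RAW=/tmp/orca-demo/raw              # stand-in for the real raw-data dir
mkdir -p "$RAW/host1" "$RAW/host2"
echo "dummy raw data" > "$RAW/host1/percol-2007-09-06.bz2"  # demo file

# Per-host usage in KB, biggest first:
du -sk "$RAW"/* | sort -rn
```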

Alternatively, you can remove old data files with a simple find 
command.  However, once those old data files are gone, you cannot 
regenerate them.
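[Editor's note: the find-based cleanup David mentions might look like
the sketch below. The path and the 400-day cutoff are assumptions, and
the -print dry run is worth doing first, since -- as he says -- deleted
raw files can never be regenerated. A /tmp stand-in with a backdated
file makes the sketch runnable.]

```shell
#!/bin/sh
# Sketch: prune raw data files older than ~400 days (just over a year).
RAW=/tmp/orca-demo/raw              # stand-in for the real raw-data dir
mkdir -p "$RAW"
touch "$RAW/percol-2005-01-01.bz2"
touch -t 200501010000 "$RAW/percol-2005-01-01.bz2"  # backdate for demo

# Dry run first -- list what would be removed:
find "$RAW" -name '*.bz2' -mtime +400 -print
# Then actually remove (irreversible: RRDs cannot be rebuilt without these):
find "$RAW" -name '*.bz2' -mtime +400 -exec rm {} \;
```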

The RRD files can be regenerated from the raw data files at any time.
You may have some old RRD files floating around from data that is no
longer meaningful.  For example, I have data for my QFE interfaces on my
Sun servers from 2005, but I disabled the QFE interfaces (and the
corresponding orcallator.conf file entries) and thus no longer collect
data on them.  I don't really need those corresponding RRD files
anymore, so I can safely remove them.
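[Editor's note: one hedged way to spot such stale RRDs. Since orca
updates live RRD files every polling cycle, anything untouched for a
month or so is a removal candidate; the path and the 30-day cutoff are
assumptions, and a backdated /tmp file keeps the sketch runnable.]

```shell
#!/bin/sh
# List RRD files orca hasn't updated in 30+ days -- likely leftovers from
# metrics no longer collected (e.g. disabled QFE interfaces).
RRD=/tmp/orca-demo/rrd              # stand-in for the real rrd dir
mkdir -p "$RRD"
touch "$RRD/qfe0-2005.rrd"
touch -t 200501010000 "$RRD/qfe0-2005.rrd"   # backdate for the demo

find "$RRD" -name '*.rrd' -mtime +30 -print  # review before removing
```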

By the way, you should probably consider upgrading to the latest orca
snapshot (r529 or later), and using the orcallator.se file from that
distribution.  Also, check your RICHPse distribution -- 3.4 is the
latest, and is recommended.  This might even help your NFS problem, but
that's unlikely.

Hope this helps,
--Dragon

Francisco Mauro Puente wrote:
> Hello List,
>
> I'm using orca-0.27 + orcallator.se 1.37.
>
> I used to run both orca and the web server on a machine running
> Linux, but since it generated huge I/O problems (mainly disk), I
> decided to process all the data, via NFS, on a Sun v490 server. Now
> the problem is that NFS takes too much CPU on the Linux box.
>
> My problem is: what can I delete from the rrd directory to free up
> some space?
> All my servers are transferring the orcallator-generated files via
> 'scp' to the Linux box, but the files are being kept on both sides,
> clients and server, eating space very very fast... how are you guys
> dealing with this? I mean, all the .bz2 files keep growing on the
> client, then get transferred to the server -- is there no purge
> implemented in any way?
>
> Any help will be very welcome
>
> Thanks
> Francisco
> _______________________________________________
> Orca-users mailing list
> Orca-users at orcaware.com
> http://www.orcaware.com/mailman/listinfo/orca-users 
>   


-------------- next part --------------
A non-text attachment was scrubbed...
Name: SERVER.LOG
Type: application/octet-stream
Size: 72156 bytes
Desc: not available
URL: </pipermail/orca-users/attachments/20070906/06eaa3f8/attachment-0002.obj>

