[Orca-users] Heterogeneous OS Env. & Orca

Sean O'Neill sean at seanoneill.info
Mon Jul 7 12:30:13 PDT 2003


At 09:28 AM 7/7/2003 -0700, Tony Pace wrote:
>
>I really require a sanity check in order to justify the level
>of effort required to build and deploy Orca in our enterprise production 
>environment. (and then pay Blair for his s/w :-)
>
>We are a mixed OS shop of
>- AIX 4.3.3, 5.1 & 5.2
>- Solaris 7, 8 & 9
>with some HPUX & Linux.
>
>We are running Oracle, DB2 and leveraging Apache/Tomcat for Web.

I don't know if anyone has done anything for collecting AIX statistics and 
graphing them with Orca.

>
>I understand that the Orca process engine (orcallator.se)
>runs "only" in a Solaris environment.
>

True.  orcallator.se is a script written for the SE Toolkit 
(http://www.setoolkit.com), a Solaris-only performance tool, and it uses 
that toolkit to collect various metrics from a Solaris system.
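
If it helps your planning: on each Solaris client you basically install the 
SE Toolkit package, drop in orcallator.se, and kick it off with the 
start_orcallator script that ships with Orca - at least that's how my 
installs went; check the INSTALL docs for your version.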

>1) Which is the best Solaris version - 7, 8 or 9? (is the i386 Solaris an 
>option)?

For the graphing server, you can use any OS that supports Perl and RRD.  I 
currently do my graphing on a FreeBSD 4.8 system.  The OS really isn't the 
important factor.  On the graphing server, it's the hardware capabilities 
that are important.  In my experience, the priority is usually in this 
order: CPU, memory, disks.  As time goes on, disk space will become an 
issue (assuming you keep all your text data), so the more disk space the 
merrier.

>
>I am not sure I fully understand the mechanism -
>or whether it is even possible - "how" to feed the telemetry/data from my 
>mixed OS environment to the "orcallator.se/Solaris" engine
>
>- are there Orca agents installed on each host
>- is there a config key for IP host address
>- is it NFS mounted
>- other

Think of Orca like this: it's basically two components, data collection and 
data graphing.  The data collectors get installed on the clients.  The 
grapher portion is installed on another server.

orcallator.se is simply a data collector specifically for Solaris - nothing 
more.  Another example of a data collector is procallator, which collects 
data for Linux systems.  I even wrote a data collector for Weblogic SNMP 
data called Weblogicator.  So you can write a data collector for just about 
anything, as long as the collected data is written in the specific format 
the graphing engine expects.
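
To give you a feel for what "a specific format" means: the grapher reads 
plain whitespace-separated text files - a header line of column names, then 
one row of values per measurement interval.  Here's a bare-bones sketch of 
a hypothetical collector in Python (the metric names, file naming, and 
paths below are made up for illustration; match them to the find_files and 
column settings in your own orca.cfg):

    #!/usr/bin/env python
    # Hypothetical minimal Orca-style data collector (illustrative only).
    # Writes one header line of column names, then one whitespace-separated
    # row of values per interval, into a new file each day.
    import os
    import time

    DATA_DIR = "/var/orca/mycollector"   # made-up path - use your own
    INTERVAL = 300                       # sample every 5 minutes

    if not os.path.isdir(DATA_DIR):
        os.makedirs(DATA_DIR)

    while True:
        now = time.time()
        load1, load5, load15 = os.getloadavg()   # demo metrics only
        fname = os.path.join(DATA_DIR, "mycollector-" +
                             time.strftime("%Y-%m-%d", time.localtime(now)))
        is_new = not os.path.exists(fname)
        f = open(fname, "a")
        if is_new:
            f.write("timestamp load_1min load_5min load_15min\n")
        f.write("%d %.2f %.2f %.2f\n" % (now, load1, load5, load15))
        f.close()
        time.sleep(INTERVAL)

The exact header orcallator writes is a lot longer than this, but the idea 
is the same: timestamped columns of numbers in flat text files.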

The graphing engine is where Blair's real magic is.  This can run on pretty 
much any OS that can run Perl and RRD.
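
Once the collected files are on the graphing server, running the grapher is 
just a matter of pointing the orca script at a config file - something 
along the lines of "orca -o orcallator.cfg" for a single pass (the exact 
config file name depends on your setup).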

In small installations, NFS works fine, but as the number of servers 
increases the NFS traffic can get heavy, and there are also the security 
issues with NFS.  Lots of folks use rsync over ssh (with public key 
authentication - at least that's what I used in my environments) to 
transfer the data between the client machines and the graphing server.  
There are several folks on the list who have some pretty LARGE 
installations.  They have cooked up some pretty neat configurations to deal 
with CPU/disk saturation on the graphing servers - because the graphing 
engine can be VERY CPU / disk intensive.
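
As a rough sketch of the rsync approach (the host names and paths here are 
made up), each client can run something like 
"rsync -az -e ssh /var/orca/mycollector/ orca@grapher:/orca/data/myhost/" 
out of cron every few minutes, using a passphrase-less key that you lock 
down to just that command on the graphing server side.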

[ Actually, that's something I wish folks would share more on.  Some folks 
have shared at the "email level" how their large environments are 
configured to deal with CPU / disk resource issues for Orca.  It would be 
great if there were a section on Blair's Orca page where folks could 
basically brag about what they have and how they did it - diagrams, 
descriptions, etc.  My installations are too small to brag about :) ]

There is something you need to think about, though, and it has nothing to 
do with Orca per se.  It has everything to do with RRD and how it manages 
data.  You need to understand that RRD is all about "averaging" data.

As you move from one level up to the next in the Orca graphs, e.g. Hourly 
to Daily, RRD takes your hourly data and "averages" it up to the daily 
level, and so on.  So technically, you have lost metric detail at the Daily 
level and above.  In the past, several folks have wanted the MAX/MIN metric 
data from the Hourly level retained in the Daily, Weekly, Monthly, 
Quarterly, and Yearly graphs.  But because of how RRD works, no one has 
found a way to do this.  Basically, your Hourly MAX of 100,000 Network 
Errors/sec might get averaged down to, say, 1,000 Network Errors/sec at the 
Daily level.  This bothers some folks.
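
To put toy numbers on that averaging (these are made up): a day is 288 
five-minute samples, so a single huge spike almost vanishes once it's 
averaged:

    # Toy illustration of RRD-style AVERAGE consolidation (made-up numbers).
    samples = [100.0] * 287 + [100000.0]   # errors/sec, one 5-minute spike

    daily_average = sum(samples) / len(samples)   # ~447 errors/sec
    daily_maximum = max(samples)                  # 100000 errors/sec

    # An archive consolidated with AVERAGE keeps only the ~447 figure;
    # the 100,000 spike is gone from the Daily view.
    print("avg %.0f  max %.0f" % (daily_average, daily_maximum))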

>
>Great levels of detail will really aid in my proposal
>for project approval.
>

Hopefully, I didn't do a "Krispy Kreme glazed doughnut" on your eyes.  Just 
reading through my response, I had to wipe mine off though - LOL


--
Sean O'Neill 



