[Orca-users] Re: Problem with LARGE data values displayed in Orca 0.23

Blair Zajac bzajac at geostaff.com
Fri Aug 13 17:49:29 PDT 1999


Here's a test script for the NFS call rate problem.  Run this and save
the output:


#include <p_iostat_class.se>
#include <p_netstat_class.se>
#include <p_vmstat_class.se>
#include <pure_rules.se>
#include <live_rules.se>

lr_rpcclient_t  lr_rpcclient$r;
lr_rpcclient_t  tmp_lr_rpcclient;

int main()
{
  long now;
  char tm_buf[32];
  tm_t tm_now;

  // Print the cumulative NFS client call count every 5 minutes.
  while (1 == 1) {
    tmp_lr_rpcclient = lr_rpcclient$r;   // snapshot the RPC client class
    now    = time(0);
    tm_now = localtime(&now);
    // Timestamp as YYYY-MM-DD HH:MM:SS.
    strftime(tm_buf, sizeof(tm_buf), "%Y-%m-%d %T", tm_now);
    printf("%10d %s %25.5f\n", now, tm_buf, tmp_lr_rpcclient.calls);
    sleep(300);
  }

  return 0;
}


Blair


Sean O'Neill wrote:
> 
> From: "Sean O'Neill" <sean.oneill at appnet.com>
> 
> Below are partial listings (lots of data).
> 
> - The pp_kernel value changed slightly, but its magnitude stayed constant
> throughout the listing.
> 
> # /usr/local/orca/bin/orcallator_column -c pagestotl -c pp_kernel -c
> free_pages orcallator-*
>               Machine  locltime pagestotl pp_kernel free_pages
> orcallator-1999-08-06  07:35:00 255067.00 4294945675.00   4171.00
> orcallator-1999-08-06  07:40:00 255067.00 4294945675.00   4188.00
> orcallator-1999-08-06  07:45:00 255067.00 4294945655.00   4060.00
> orcallator-1999-08-06  07:50:00 255067.00 4294945659.00   4034.00
> orcallator-1999-08-06  07:55:00 255067.00 4294945674.00   4070.00
> orcallator-1999-08-06  08:00:00 255067.00 4294945671.00   4073.00
> orcallator-1999-08-06  08:05:00 255067.00 4294945682.00   4076.00
> orcallator-1999-08-06  08:10:00 255067.00 4294945686.00   4099.00
> 
> - The nfs_call/s column was zero except for the two entries shown below.
> As I said in my previous post, this system is an NFS client, not a server
> (whether that matters or not, I'm not sure).
> 
> # /usr/local/orca/bin/orcallator_column -c nfs_call/s orcallator-*
>               Machine  locltime nfs_call/s
> orcallator-1999-08-05  08:45:00      0.00
> orcallator-1999-08-05  08:50:00      0.00
> orcallator-1999-08-05  08:55:01 727718711.28
> orcallator-1999-08-05  09:00:00      0.00
> orcallator-1999-08-05  09:05:00      0.00
> orcallator-1999-08-05  09:10:00 1145324612.27
> orcallator-1999-08-05  09:15:00      0.00
> orcallator-1999-08-05  09:20:00      0.00
> 
> Since these files were generated by orcallator.se, this looks like it's an
> SE Toolkit or orcallator.se problem?
> 
> > -----Original Message-----
> > From: Blair Zajac [mailto:bzajac at geostaff.com]
> > Sent: Thursday, August 05, 1999 11:21 PM
> > To: orca-help at onelist.com
> > Cc: Sean O'Neill
> > Subject: Re: [orca-help] Problem with LARGE data values displayed in
> > Orca 0.23
> >
> >
> > Sean,
> >
> > Can you check the text files orcallator.se is generating to see if the
> > problem is from orcallator.se or somewhere after orcallator?  There
> > is a program that comes with orca named orcallator_column that
> > will display the values for a particular column if you give it the
> > -c option followed by the column name.
> >
> > You'll probably want to run
> >
> > orcallator_column -c pagestotl -c pp_kernel -c free_pages percol-*
> >
> > and
> >
> > orcallator_column -c nfs_call/s percol-*
> >
> > If these values look fine but the plots don't, send me the percol-*
> > files and I'll try to see what is going on with them.
> >
> > Blair
> >
> >
> > Sean O'Neill wrote:
> >
> > > From: "Sean O'Neill" <sean.oneill at appnet.com>
> > >
> > > I'm having a problem with Orca 0.23 in how it displays data for the
> > > following system:
> > >
> > > Ultra 450
> > > Solaris 2.6
> > > 2GB of memory
> > > Orca 0.23 (running orcallator only - RRD running on another system -
> > > RRD is the version distributed with Orca v0.23)
> > >
> > > In the Daily Page Usage chart, I get the following numbers for
> > > Current usage:
> > >
> > > Kernel: 4294941604.000000
> > > Free List: 202801.000000
> > > Other: 211219.000000
> > > System Total: 255067.000000
> > >
> > > Because the Kernel is showing up so large (I'm not page thrashing,
> > > BTW - my kernel isn't this big), the graph is completely green.  I
> > > have another system running Solaris 7 (a much smaller system) on
> > > which this graph appears to work fine.  Also, the NFS Call Rate
> > > graph shows the exact same behavior:
> > >
> > > Average: 10137670.554552
> > > Max: 1145324612.267000
> > >
> > > The data above was shown after the system had been up only 2 hours.
> > > BTW, I'd like to see any system that could handle this kind of
> > > load !!!! :)  This system is using NFS to write the orcallator data
> > > to the RRD server.
> > >
> > > Any ideas on why Kernel and the NFS call rate are showing up so
> > > large?  Is this an SE Toolkit problem?  Something in the Orca
> > > configuration I can tweak?
> > >
> > > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> > > Sean O'Neill
> > > AppNet, Inc.
> > > 301-953-3330
> > > soneill at cen.com
> > > soneill at centurycomputing.com
> > > sean.oneill at appnet.com
> > >
> >
> >
> 


