[Orca-users] tracking file sizes

Sean O'Neill sean at seanoneill.info
Tue Apr 8 10:10:29 PDT 2003


At 11:03 AM 4/8/2003 -0500, Karl.Rossing at Federated.CA wrote:
>Hi,
>
>I'd like to track the file size on disk of my progress databases.
>
>Can someone point me in the right direction.

I've attached some files that should give you an example of what can be 
done outside of the usual Orca collectors.  You will need to really look 
at it and tweak it for what you want - which should be fairly trivial 
considering what you are using it for.

This is probably a much heavier example than what you are asking for, but 
it's all I have.

The example I attached is something I wrote for collecting and graphing 
Weblogic 5.1 SNMP statistics.

weblogicator.pl => This is a Perl script that does two things:

1) Manages the collected data files just like orcallator.se does e.g. it 
rolls them at midnight, compresses them as necessary, etc.  This is really 
the meat of the script.
2) Collects the SNMP data from WL.  This is the simple part of the script.

Hopefully, you know Perl.  If so, you can strip out the SNMP collection stuff 
and just use the log management stuff for what you want.  Then update the 
main loop of the script to simply grab the size of your database in 
each iteration of the loop (see the sketch below).
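
Here is a minimal sketch of what that stripped-down main loop might look 
like.  The database paths and column names are placeholders I made up - 
replace them with your own, and wrap the loop with the OutputLog log-rolling 
calls from weblogicator.pl if you want the midnight roll/compress behavior:

   #!/usr/local/bin/perl
   # Sketch only - the paths and headers below are made-up placeholders.
   my @dbfiles = ('/progress/db/prod.db', '/progress/db/prod.b1');

   # Column header line, orcallator.se style: timestamp, wall clock time,
   # then one column per file.
   print "timestamp locltime " . join(' ', @dbfiles) . "\n";

   while (1) {
      my $timenow = time();
      my ($sec, $min, $hour) = (localtime($timenow))[0,1,2];
      my $timestring = sprintf("%02d:%02d:%02d", $hour, $min, $sec);

      # -s returns the file size in bytes (0 if the file is missing).
      my @sizes = map { (-s $_) || 0 } @dbfiles;

      print "$timenow $timestring @sizes\n";
      sleep 300;
   }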

weblogicator.ksh => This is simply how I start the script.

weblogicator.cfg => This is the .cfg file for creating the graphs.  I 
unfortunately don't have any examples of the graphs around - I nuked them 
by accident a while back.  I could probably recreate them since I still have 
the .rrd files for the data, but I'm lazy {grin}

weblogicator-2002-11-14-000 => This is a sample of the collected data.  You 
can match up the column headers in this file with weblogicator.cfg to see 
how the graphs are created from this data (see the sketch below).  And yes, 
the negative numbers in the data come from WL 5.1 - their attempt at 
performance data in that version has some nasty problems which they have no 
plans to fix in 5.1.
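
In case the .cfg attachment doesn't make it through, here is a rough sketch 
of the kind of entries it contains.  The directive names are the standard 
ones from orcallator.cfg; the values below are from memory rather than 
copied from the real file, so treat it as an illustration only:

   group weblogicator {
   find_files          /local/home/perfboy/orca/var/orca/weblogicator/(.*)/weblogicator-\d{4}-\d{2}-\d{2}-\d{3}(?:\.(?:Z|gz))?
   column_description  first_line
   date_source         column_name timestamp
   interval            60
   }

   plot {
   title               %g JDBC Connections In Use
   source              weblogicator
   data                jdbcCurrentInUse
   legend              Connections currently in use
   y_legend            Connections
   data_min            0
   }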


--
........................................................
......... ..- -. .. -..- .-. ..- .-.. . ... ............
.-- .. -. -... .-.. --- .-- ... -.. .-. --- --- .-.. ...

Sean O'Neill 
-------------- next part --------------
#!/usr/local/bin/perl
# -*- mode: Perl -*-

# Author: Sean O'Neill
# Date:   Oct, 2002
#         sean at deletethistoemail.seanoneill.info.deletethistoemail

# This script basically reads various OIDs from a Weblogic SNMP instance.

# This script in its current form works for how we have Weblogic set up.  It
# may need some tweaking to work with how others have Weblogic set up.
# In particular, it is specific to the Weblogic session names.  Ours are called
# session1 and session2.  This affects the -s switch value and the OID definitions.

# This script tries to simulate how the orcallator.se script handles log
# files for Orca, e.g. opening a new log file at midnight (or thereabouts),
# compressing the previous day's log, and compressing the previous log file
# after a restart.  I think it comes pretty close to how orcallator.se does it.

# The snmpget and snmpgettable subroutines in this script came from a Perl
# script submitted to the orca-users@yahoogroups.com mailing list by Adam
# Levin.  Much appreciated :)  I'm not using snmpgettable in this script,
# but I left it in here as a reference.

# The become_daemon and open_pid_file subroutines came from the "Network
# Programming with Perl" book by Lincoln Stein.  Very nice book - get it.

# The recursive_mkdir subroutine came from a posting I found on the net.
# Thanks, whoever you are.

#
# This is my first attempt at using Perl packages and objects.  I was having
# a TERRIBLE time with scoping when I first wrote this thing and using
# packages and objects made all that a bad memory - for the most part.  So
# be nice if you have comments on how I did this ;> I'm not a programmer - by
# trade anyway.
#

# Needs fixing:
#   - Script will die if SNMP agent on remote machine terminates
#   - Script will terminate if SNMP community string is wrong
#   - Not sure how to distinguish the two.  Maybe maintain a flag indicating
#     a successful communication occurred in the past.  So the first time in,
#     if it fails then die; otherwise set all values to 0 or something.

use IO::File;
use SNMP_Session "0.53";
use BER "0.50";
use Getopt::Std;
use POSIX 'setsid';

#use strict;

###### SUBROUTINES DEFINITIONS for OutputLog PACKAGE ########

package OutputLog;

#
# Create new OutputLog class object
#
sub new {

   use constant COMPRESS => "gzip";
   use constant COMPRESSARG => "-2";
   use constant COMPRESSEXT => ".gz";

   my ($outputdir, $basename, $year, $month, $day) = @_;
   my $r_outputlog = {
      "outputdir"     => $outputdir,
      "basename"      => $basename,
      "year"          => $year,
      "month"         => $month,
      "day"           => $day,
      "fileincrement" => 0,
      "compressutil"  => COMPRESS,
      "compressarg"   => COMPRESSARG,
      "compressext"   => COMPRESSEXT,
      "FD"            => ""
  };
  bless $r_outputlog, 'OutputLog';
  return $r_outputlog;
}

#
# This does nothing more than return the fully qualified filename as currently defined within
# the OutputLog object.
#
sub getoutputlogfilename {
   my $r_outputlog = shift;

   return "STDOUT" if( $r_outputlog->{'outputdir'} eq "STDOUT" );

   return sprintf("%s/%s-%04d-%02d-%02d-%03d", $r_outputlog->{'outputdir'}, $r_outputlog->{'basename'},  $r_outputlog->{'year'}, $r_outputlog->{'month'}, $r_outputlog->{'day'}, $r_outputlog->{'fileincrement'});

}

#
# This figures out what the OutputLog object's {'fileincrement'} value should
# be.  If files for the current day already exist that aren't compressed, it
# compresses them.
#
sub determineoutputlogfilename {
   my $r_outputlog = shift;
   my $flag = 1;
   my $filename;

   if( $r_outputlog->{'outputdir'} eq "STDOUT" ) {
      return "STDOUT";
   } else {
      while( $flag ) {
         $filename = getoutputlogfilename($r_outputlog);
         #
         # If current filename AND current filename COMPRESSED already exists,
         # delete the compressed file, recompress, and increment 
         # {'fileincrement'}
         #
         if( -f $filename && -f $filename . $r_outputlog->{'compressext'} ) {
            unlink($filename . $r_outputlog->{'compressext'});
            compressoutputlog( $r_outputlog->{'compressutil'}, $r_outputlog->{'compressarg'}, $filename );
            $r_outputlog->{'fileincrement'}++;
            next;
         }
         #
         # If current filename already exists, compress it, and increment 
         # {'fileincrement'}
         #
         if( -f $filename ) {
            compressoutputlog( $r_outputlog->{'compressutil'}, $r_outputlog->{'compressarg'}, $filename );
            $r_outputlog->{'fileincrement'}++;
            next;
         }
         #
         # If current filename COMPRESS already exists, simply increment 
         # {'fileincrement'}
         #
         if( -f $filename . $r_outputlog->{'compressext'} ) {
            $r_outputlog->{'fileincrement'}++;
            next;
         }
         $flag = 0;
      }
   }
}


#
# I think the title say nuff ...
#
sub compressoutputlog {

   #
   # @_ is passed in the correct order already for system():
   # command argument argument
   # gzip -2 filename
   #
   system(@_) == 0 or die "system @_ failed";

}

#
# I think the title say nuff ...
# This subroutine passes back a reference to the file descriptor.
#
sub openoutputlog {

   my $r_outputlog = shift;
   my $filename = getoutputlogfilename($r_outputlog);

   local *OUTFH;

   if( $filename eq "STDOUT" ) {
      open(OUTFH, ">&STDOUT") or die "Can't output STDOUT: $!\n";
   } else {
      open(OUTFH, ">$filename") or die "Can't create $filename: $!\n";
   }
   $r_outputlog->{'FD'} = *OUTFH;
   return *OUTFH;

}

#
# I think the title say nuff ...
#
sub closeoutputlog {
   my $r_outputlog = shift;

   close $r_outputlog->{'FD'};

}

###### SUBROUTINES DEFINITIONS for MAIN PACKAGE ########

package main;

sub become_daemon() {

   die "Can't fork" unless defined (my $child = fork);
   exit(0) if $child; # Parent dies
   setsid();          # Become session leader
   open(STDIN, "</dev/null");
   if( $verbose ) {
      open(STDERR, ">&STDOUT");
   } else {
      open(STDERR, ">/dev/null");
   }
   chdir '/local/var/log';  # Change working directory
   $ENV{PATH} = '/bin:/usr/sbin:/usr/bin:/usr/local/bin';
   return $$;

}

sub print_usage() {

   print <<EOF;

USAGE: weblogicator.pl [-options] community\\\@server

EXAMPLES:
      The following outputs 5 iterations of stats to the screen, sleeping
      5 seconds between each:
         weblogicator.pl -D STDOUT -c 5 -S 5 public\\\@staging-app01

      The following goes into daemon mode and outputs to
      /local/home/perfboy/orca/var/orca/weblogicator/staging-app01/weblogicator-YYYY-MM-DD-###
         weblogicator.pl -d public\\\@staging-app01

OPTIONS:
      -b <basename>  Basename of outputfile.  Defaults to "weblogicator".
      -c <count>     Number of times to run through main loop
      -D <directory> Directory to write data into - Defaults to:
                     /local/home/perfboy/orca/var/orca/weblogicator
                     Value can also be STDOUT for output to the screen.  STDOUT
                     cannot be specified in daemon mode.
      -d             Become daemon
      -h             Display this help text
      -p <port #>    Weblogic SNMP agent port number - defaults to 161
      -P <pid filename> filename to use for storing the child process pid value
                     Useful if you want to run multiple instances of this
                     script on a machine and avoid pid filename collisions.
      -s <session#>  [ 1 | 2 ] - defaults to 1
      -S <seconds>   Sleep time in seconds - defaults to 60
      -v             Verbose - gives a little more output during processing

EOF
   exit(0);

}

sub generate_outputdir($$) {
   my ($outputdir, $router) = @_;
   return sprintf("%s/%s", $outputdir, $router);
}

sub recursive_mkdir($) { 
   #-----------------------------------------------------
   my $path = shift;
   my @dirs = split "/" => $path;
   my $tmp  = "";   # build the path back up one component at a time
   foreach my $dir (@dirs) {
      $tmp .= "$dir/";
      unless ( -e $tmp and -d _ ) {
         mkdir($tmp, 0755) || die "Cannot make $tmp: $!";
      }
      next;
   }
}


sub open_pid_file($) {

   my $file = shift;
   if (-e $file) {
      my $fh = IO::File->new($file) || return;
      my $pid = <$fh>;
      die "Server already running with PID $pid" if kill 0 => $pid;
      warn "Removing PID file for defunct server process $pid.\n" if ( $verbose );
      die "Can't unlink PID file $file" unless -w $ file && unlink $file;
   }
   warn "Opening pid file: $file\n" if ( $verbose );
   my $fh = IO::File->new($file, O_CREAT|O_EXCL|O_WRONLY, 0644)
      or die "Can't create $file: $!\n";
   return $fh;
}

sub snmpget{
   my($host, $community, $port, @vars) = @_;
   my(@enoid, $var, $response, $bindings, $binding, $value, $inoid, $outoid,
      $upoid, $oid, @retvals);
   my($hackcisco);
   foreach $var (@vars) {
      die "Unknown SNMP var $var\n"
      unless $snmpget::OIDS{$var} || $var =~ /^\d+[\.\d+]*\.\d+$/;
      if ($var =~ /^\d+[\.\d+]*\.\d+/) {
         push @enoid,  encode_oid((split /\./, $var));
         $hackcisco = 1;
      } else {
         push @enoid,  encode_oid((split /\./, $snmpget::OIDS{$var}));
         $hackcisco = 0;
      }
   }
   srand();
   my $session = SNMP_Session->open ($host ,
                                 $community,
                                 $port);
   if ($session->get_request_response(@enoid)) {
      $response = $session->pdu_buffer;
      ($bindings) = $session->decode_get_response ($response);
      $session->close ();
      while ($bindings) {
         ($binding,$bindings) = decode_sequence ($bindings);
         ($oid,$value) = decode_by_template ($binding, "%O%@");
         my $tempo = pretty_print($value);
         $tempo=~s/\t/ /g;
         $tempo=~s/\n/ /g;
         $tempo=~s/^\s+//;
         $tempo=~s/\s+$//;

         push @retvals,  $tempo;
      }

      return (@retvals);
   } else {
      if ($hackcisco) {
         return ("");
      } else {
         die "No answer from $ARGV[0]. You may be using the wrong community\n";
      }
   }
}

#
# This subroutine is NOT being used at this time.
#
sub snmpgettable{
  my($host, $community, $port, $var)  = @_;
  my($next_oid, $enoid, $orig_oid,
     $response, $bindings, $binding, $value, $inoid, $outoid,
     $upoid, $oid, @table, $tempo);
  die "Unknown SNMP var $var\n"
    unless $snmpget::OIDS{$var};

  $orig_oid = encode_oid(split /\./, $snmpget::OIDS{$var});
  $enoid=$orig_oid;
  srand();
  my $session = SNMP_Session->open ($host ,
                                 $community,
                                 $port);
  for(;;)  {
    if ($session->getnext_request_response(($enoid))) {
      $response = $session->pdu_buffer;
      ($bindings) = $session->decode_get_response ($response);
      ($binding,$bindings) = decode_sequence ($bindings);
      ($next_oid,$value) = decode_by_template ($binding, "%O%@");
      # quit once we are outside the table
      last unless BER::encoded_oid_prefix_p($orig_oid,$next_oid);
      $tempo = pretty_print($value);
      #print "$var: '$tempo'\n";
      $tempo=~s/\t/ /g;
      #print "$var: '$tempo'\n";
      $tempo=~s/\t/ /g;
      $tempo=~s/\n/ /g;
      $tempo=~s/^\s+//;
      $tempo=~s/\s+$//;
      push @table, $tempo;

    } else {
      die "No answer from $ARGV[0]\n";
    }
    $enoid=$next_oid;
  }
  $session->close ();
  return (@table);
}

###### MAIN LOOP ######

getopts("c:dD:hp:P:s:S:v");

print_usage() if $opt_h;

my($community,$router) = split /\@/, $ARGV[0];
print_usage unless $community && $router;

my $timenow = time();
my ($sec, $min, $hour, $day, $month, $year) = (localtime($timenow))[0,1,2,3,4,5];
my $count = 0;
my $basename = ($opt_b ? $opt_b : "weblogicator");
my $outputdir = ($opt_D ? $opt_D : "/local/home/perfboy/orca/var/orca/weblogicator");
$outputdir =~ s/\/$//g; # Strip trailing slash if there is one
my $logfileobj;

if( $outputdir eq "STDOUT" ) {

   $logfileobj = OutputLog::new($outputdir, "", "", "", "");

} else {

   $outputdir = generate_outputdir($outputdir, $router);
   recursive_mkdir($outputdir) if( ! -d $outputdir );

   $logfileobj = OutputLog::new($outputdir, $basename, $year + 1900, $month + 1, $day);
   $logfileobj->determineoutputlogfilename();

}

my $maxcount = ($opt_c ? $opt_c : -1);

my $sessionID = 49;
if( $opt_s ) {
   print_usage if( $opt_s != 1 && $opt_s != 2);
   $sessionID = 50 if( $opt_s == 2);
}

my $port = ($opt_p ? $opt_p : 161);
my $pid_file = ($opt_P ? $opt_P : '/var/tmp/weblogicator.pid');
my $sleeptime = ($opt_S ? $opt_S : 60);
local $verbose = ($opt_v ? $opt_v : 0);

if( $opt_d ) {
   die "Demon mode specified - you cannot specify STDOUT in daemon mode." if( $outputdir eq "STDOUT");
   print "Going daemon mode ... L8R !!!!\n";
   my $pidfh = open_pid_file( $pid_file );
   my $pid = become_daemon();
   print $pidfh $pid;
   close $pidfh;
} else {
   print STDERR "Output to " . $logfileobj->getoutputlogfilename() . "\n" if( $verbose );
}

my $outfh = $logfileobj->openoutputlog();
#
# Select $outfh so all the print statements don't need
# filehandle specified.
#
select $outfh;
$| = 1;  # Disable output buffering on $outfh

#
# The following OIDs are being collected.  The "#" in session# is either 1 or 2
# depending on the -s switch this script is started with.
#
# BEA-WEBLOGIC-MIB::serverUptime."ejbCluster"."session#"
# BEA-WEBLOGIC-MIB::serverMaxHeapSpace."ejbCluster"."session#"
# BEA-WEBLOGIC-MIB::serverHeapUsedPct."ejbCluster"."session#"
# BEA-WEBLOGIC-MIB::serverQueueLength."ejbCluster"."session#"
# BEA-WEBLOGIC-MIB::serverQueueThroughput."ejbCluster"."session#"
# BEA-WEBLOGIC-MIB::jdbcMaxCapacity."oraclePool"."session#"
# BEA-WEBLOGIC-MIB::jdbcInitCapacity."oraclePool"."session#"
# BEA-WEBLOGIC-MIB::jdbcCurrentPoolSize."oraclePool"."session#"
# BEA-WEBLOGIC-MIB::jdbcCurrentInUse."oraclePool"."session#"
# BEA-WEBLOGIC-MIB::jdbcTotalPendingConnections."oraclePool"."session#"
# BEA-WEBLOGIC-MIB::jdbcHighwaterPendingConnections."oraclePool"."session#"
# BEA-WEBLOGIC-MIB::jdbcHighwaterWaitTime."oraclePool"."session#"
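#
# A note on the OID values below (my reading of them, so double-check against
# the BEA MIB): the dotted-decimal tails are length-prefixed ASCII strings.
# For example, 10.101.106.98.67.108.117.115.116.101.114 is the 10-character
# string "ejbCluster", 10.111.114.97.99.108.101.80.111.111.108 is
# "oraclePool", and 8.115.101.115.115.105.111.110.<sessionID> is the
# 8-character string "session1" or "session2" ($sessionID is 49 or 50, the
# ASCII codes for '1' and '2').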

%snmpget::OIDS = ('serverUptime' => '.1.3.6.1.4.1.140.600.20.1.40.10.101.106.98.67.108.117.115.116.101.114.8.115.101.115.115.105.111.110.' . $sessionID,
'serverMaxHeapSpace' => '.1.3.6.1.4.1.140.600.20.1.60.10.101.106.98.67.108.117.115.116.101.114.8.115.101.115.115.105.111.110.' . $sessionID,
'serverHeapUsedPct' => '.1.3.6.1.4.1.140.600.20.1.65.10.101.106.98.67.108.117.115.116.101.114.8.115.101.115.115.105.111.110.' . $sessionID,
'serverQueueLength' => '.1.3.6.1.4.1.140.600.20.1.70.10.101.106.98.67.108.117.115.116.101.114.8.115.101.115.115.105.111.110.' . $sessionID,
'serverQueueThroughput' => '.1.3.6.1.4.1.140.600.20.1.75.10.101.106.98.67.108.117.115.116.101.114.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcMaxCapacity' => '.1.3.6.1.4.1.140.600.50.1.15.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcInitCapacity' => '.1.3.6.1.4.1.140.600.50.1.20.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcCurrentPoolSize' => '.1.3.6.1.4.1.140.600.50.1.25.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcCurrentInUse' => '.1.3.6.1.4.1.140.600.50.1.30.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcTotalPendingConnections' => '.1.3.6.1.4.1.140.600.50.1.35.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcHighwaterPendingConnections' => '.1.3.6.1.4.1.140.600.50.1.40.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID,
'jdbcHighwaterWaitTime' => '.1.3.6.1.4.1.140.600.50.1.45.10.111.114.97.99.108.101.80.111.111.108.8.115.101.115.115.105.111.110.' . $sessionID );

printf "timestamp locltime serverUptime serverMaxHeapSpace serverHeapUsedPct serverQueueThroughput jdbcMaxCapacity jdbcInitCapacity jdbcCurrentPoolSize jdbcCurrentInUse jdbcTotalPendingConnections jdbcHightwaterPendingConnections jdbcHighwaterWaitTime\n";

while (1) {
   my $timenow = time();
   my ($sec, $min, $hour, $day, $month, $year) = (localtime($timenow))[0,1,2,3,4,5];
   my $timestring = sprintf("%02d:%02d:%02d", $hour, $min, $sec);

   my($serverUptime, $serverMaxHeapSpace, $serverHeapUsedPct, $serverQueueThroughput, $jdbcMaxCapacity, $jdbcInitCapacity, $jdbcCurrentPoolSize, $jdbcCurrentInUse, $jdbcTotalPendingConnections, $jdbcHightwaterPendingConnections, $jdbcHighwaterWaitTime) = snmpget($router, $community, $port, 'serverUptime','serverMaxHeapSpace', 'serverHeapUsedPct', 'serverQueueThroughput', 'jdbcMaxCapacity', 'jdbcInitCapacity', 'jdbcCurrentPoolSize', 'jdbcCurrentInUse', 'jdbcTotalPendingConnections', 'jdbcHighwaterPendingConnections', 'jdbcHighwaterWaitTime');
   $serverUptime =~ s/ /-/g;

   #
   # If the day changes (e.g. 12th becomes the 13th), its times to roll the
   # current log file and compress it.
   #
   if( $logfileobj->{'outputdir'} ne "STDOUT" && $day != $logfileobj->{'day'} ) {
      printf STDERR ("minute %s  objminute %s\n", $min, $logfileobj->{'day'});
      $logfileobj->closeoutputlog;
      $logfileobj->determineoutputlogfilename;
      $logfileobj->{'year'} = $year + 1900;
      $logfileobj->{'month'} = $month + 1;
      $logfileobj->{'day'} = $day;
      $logfileobj->{'fileincrement'} = 0;
      print STDERR "Changing log to " . $logfileobj->getoutputlogfilename() . "\n" if( $verbose );
      $outfh = $logfileobj->openoutputlog();
      select $outfh;
      $| = 1;  # Disable output buffering on $outfh
      printf "timestamp locltime serverUptime serverMaxHeapSpace serverHeapUsedPct serverQueueThroughput jdbcMaxCapacity jdbcInitCapacity jdbcCurrentPoolSize jdbcCurrentInUse jdbcTotalPendingConnections jdbcHightwaterPendingConnections jdbcHighwaterWaitTime\n";
   }

   printf "$timenow $timestring $serverUptime $serverMaxHeapSpace $serverHeapUsedPct $serverQueueThroughput $jdbcMaxCapacity $jdbcInitCapacity $jdbcCurrentPoolSize $jdbcCurrentInUse $jdbcTotalPendingConnections $jdbcHightwaterPendingConnections $jdbcHighwaterWaitTime\n"
;

   $count++ if( $maxcount != -1 );
   if( $count == $maxcount ) {
      $logfileobj->closeoutputlog;
      exit(0);
   }
   sleep $sleeptime;
}

$logfileobj->closeoutputlog;
exit(0);
-------------- next part --------------
#!/bin/ksh

perl weblogicator.pl -d -P /var/tmp/weblogicator-prod-app01.pid notpublic\@prod-app01
perl weblogicator.pl -d -s 2 -P /var/tmp/weblogicator-prod-app02.pid notpublic\@prod-app02

perl weblogicator.pl -d -P /var/tmp/weblogicator-staging-app01.pid notpublic\@staging-app01
perl weblogicator.pl -d -s 2 -P /var/tmp/weblogicator-staging-app02.pid notpublic\@staging-app02

perl weblogicator.pl -d -P /var/tmp/weblogicator-certification-app01.pid notpublic\@certification-app01
perl weblogicator.pl -d -s 2 -P /var/tmp/weblogicator-certification-app02.pid notpublic\@certification-app02
-------------- next part --------------
A non-text attachment was scrubbed...
Name: weblogicator.cfg
Type: application/octet-stream
Size: 5411 bytes
Desc: not available
URL: </pipermail/orca-users/attachments/20030408/1a34631a/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: weblogicator-2002-11-14-000
Type: application/octet-stream
Size: 6324 bytes
Desc: not available
URL: </pipermail/orca-users/attachments/20030408/1a34631a/attachment-0001.obj>

