[Orca-users] hitachi 9980

Dmitry Berezin dmitryb at rutgers.edu
Wed Jul 21 03:18:12 PDT 2004


FWIW -
I had a similar problem some time ago. Upgrading both perl and orca to the latest versions solved the problem.
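Before upgrading, it may help to confirm which perl and orca versions are actually in use on that box. A rough, untested sketch along these lines would print both; the orca script path is a guess (the post only shows the Orca modules under /usr/local/lib), so adjust it for the local install:

#!/usr/bin/perl -w
# Rough sketch (untested): print the running perl version and any
# version string found in the installed orca script.  The script
# location below is an assumption - adjust for the local install.
use strict;

print "perl $]\n";                        # e.g. 5.008 for perl 5.8.0

my $orca_script = '/usr/local/bin/orca';  # assumed path
open(ORCA, $orca_script) or die "cannot read $orca_script: $!\n";
while (<ORCA>) {
    # Report any line that looks like a version assignment.
    print "orca: $_" if /version/i && /=/;
}
close(ORCA);

(perl -v and a grep for "version" in the orca script from the shell show the same thing; the point is just to note the versions before and after the upgrade.)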

----- Original Message -----
From: "Saladin, Miki" <lsaladi at uillinois.edu>
Date: Tuesday, July 20, 2004 7:45 pm
Subject: [Orca-users] hitachi 9980

> We recently installed a Hitachi 9980 disk array in our center, and as a
> result orca (0.264, orcallator.cfg file version 1.36) seems to have
> developed a memory leak.
> Only 10 domains are being graphed by this orca process, so we are not
> talking about large numbers. On the domain on which orca runs, memory
> swap space usage was absolutely constant until this device was
> introduced. The introduction of this device, of course, significantly
> increased the number of disks to be graphed - for multiple domains.
> Here is some top output. The first sample is about 5 hours after
> starting orca -
> load averages:  0.95,  0.66,  0.57                              16:34:41
> 78 processes:  76 sleeping, 2 on cpu
> CPU: 40.1% idle, 48.6% user,  4.5% kernel,  6.8% iowait,  0.0% swap
> Memory: 2048M real, 1306M free, 356M swap in use, 2135M swap free
> 
>   PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
>  8360 root       1   0    0  294M  241M cpu/0  151:24 48.25% perl
> <<<<<<<<<<<<<<<<<< ORCA 
>  9124 lsaladi    1  58    0 2344K 1744K cpu/1    0:33  0.07% top
>   352 root      13  58    0 4040K 2600K sleep    6:13  0.04% syslogd
>   373 root       1  58    0 1064K  720K sleep    0:00  0.01% utmpd
>   726 root       1  58    0    0K    0K sleep    1:01  0.00% se.sparcv9
>   353 root       1  54    0 2016K 1360K sleep    0:27  0.00% cron
>   624 root       1  59    0   27M   66M sleep    0:06  0.00% Xsun
>  9115 root       1  58    0 5192K 3216K sleep    0:03  0.00% sshd2
>   828 root       6  49    0 9904K 6224K sleep    0:02  0.00% dtsession
>    20 root       1  58    0 8688K 6560K sleep    0:02  0.00% vxconfigd
>  5567 root       1  58    0 5200K 3224K sleep    0:02  0.00% sshd2
>   834 root       8  59    0 9312K 6496K sleep    0:02  0.00% dtwm
>   753 root       1   0    0 5248K 2032K sleep    0:01  0.00% perl
> 18522 root       1  58    0 5296K 3360K sleep    0:01  0.00% sshd2
>   389 root       1   0    0 1936K 1328K sleep    0:00  0.00% vxconfigba
> This is the next day at noon - same running orca process -
> 
> load averages:  0.54,  0.56,  0.50                              12:07:43
> 72 processes:  70 sleeping, 2 on cpu
> CPU states: 49.6% idle, 49.0% user,  1.4% kernel,  0.0% iowait,  0.0% swap
> Memory: 2048M real, 743M free, 1203M swap in use, 1287M swap free
> 
>   PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
>  8360 root       1   0    0 1144M  817M cpu/1  641:59 38.44% perl
> <<<<<<<<<<<<<<<<<<<<<<ORCA
> 11358 root       1  58    0 2296K 1696K cpu/0    0:00  0.16% top
>   352 root      13  58    0 4040K 2600K sleep    6:49  0.16% syslogd
>  9124 lsaladi    1  58    0 2344K 1744K sleep    2:39  0.08% top
>   373 root       1  58    0 1064K  720K sleep    0:02  0.02% utmpd
>  9115 root       1  59    0 5192K 3216K sleep    0:15  0.00% sshd2
> 11114 apache     3  58    0 3344K 2312K sleep    0:00  0.00% httpd
> 11121 apache     3  58    0 3344K 2296K sleep    0:00  0.00% httpd
>   726 root       1  58    0    0K    0K sleep    1:05  0.00% se.sparcv9.5.8
>   353 root       1  54    0 2016K 1360K sleep    0:30  0.00% cron
>  5567 root       1  58    0 5208K 3232K sleep    0:06  0.00% sshd2
>   624 root       1  59    0   27M   66M sleep    0:06  0.00% Xsun
>   828 root       6  49    0 9904K 6224K sleep    0:02  0.00% dtsession
>    20 root       1  58    0 8688K 6560K sleep    0:02  0.00% vxconfigd
>   834 root       8  59    0 9312K 6496K sleep    0:02  0.00% dtwm
> 
> I know there have been memory leaks reported on this list for orca 0.27,
> so I'm not sure upgrading is the solution. Any suggestions would be
> appreciated.
> As you can see above, swap space is going, going, and will once again
> soon be gone. Last time it took around 42 hours before ORCA crashed with
> Out of memory during "large" request for 266240 bytes, total sbrk() is
> 2545103736 bytes at /usr/local/lib/Orca/ImageFile.pm line 318.
> 
> 
> _______________________________________________
> Orca-users mailing list
> Orca-users at orcaware.com
> http://www.orcaware.com/mailman/listinfo/orca-users
> 



