[kepler-dev] Memory leak.

Kevin Ruland kruland at ku.edu
Wed Dec 21 19:26:11 PST 2005


Christopher,

Sounds like a good solution.

I think Sun disabled the ability to intentionally call the gc in the 1.3 
era - but I could be mistaken.  The calls are still there but pretty 
much dysfunctional.  However, it is reassuring that the memory usage 
went up after gc both before and after the change, so the change didn't 
introduce that behavior.

All,

I check for memory leaks by using -Xrunhprof.  -Xrunhprof:help (note the 
colon) lists the arguments.  My preferred combination is 
-Xrunhprof:cutoff=0.005,depth=15.  There are a number of resources on 
the web describing runhprof, although it has now been superseded by 
JVMTI (or something like that), which gives greater control.  The brief 
rundown: the report generated in the file java.hprof.txt (which for me 
was between 100M and 250M depending on hprof options) is divided into 
three parts:

Stack traces -

Allocation/object information -

Active objects -

I only know how to use the first and third sections.  There might be 
an option to hprof to omit the second section, but I don't really know.

Every stack that allocated memory which still has an active reference at 
program exit is represented.  These stacks are called "traces".

The active object section tells you the collective size of the objects 
"leaked" at each trace.  They are in descending order by total size.

I tend to follow this procedure.  Find the largest culprit in the active 
object section.  Identify its trace number, which is in the second-to-last 
column (the columns are not formatted very well).  Suppose it's 
10041.  I then search backwards through the file looking for the string 
"TRACE 10041:" (note the all caps and the colon).  This gives you the 
stack trace which allocated all that memory.
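For illustration, the active object section ("SITES") and a matching 
trace look roughly like this.  The numbers and class names below are 
made up, and the exact column layout varies by JDK version:

  SITES BEGIN (ordered by live bytes)
            percent          live          alloc'ed  stack class
   rank   self  accum     bytes objs     bytes  objs trace name
      1 23.55% 23.55%   1433600  700   2867200  1400 10041 byte[]
  SITES END

  TRACE 10041:
          java.io.BufferedInputStream.<init>(BufferedInputStream.java:178)
          org.example.SomeActor.readData(SomeActor.java:42)

The trace number, 10041, is in the second-to-last column of the SITES 
table; that is the string to search backwards for.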

The arguments I give it:

cutoff=0.005 means don't report on objects whose total is less than 0.5% 
of the total memory allocated.

depth=15 means generate stack traces 15 frames deep.  The default for 
this is 4, which is almost never enough.
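Putting the arguments together, the invocation looks something like the 
following; the classpath variable and main class here are placeholders, 
not the actual Kepler ones:

  java -Xrunhprof:cutoff=0.005,depth=15 -classpath "$CLASSPATH" MyMainClass

The report lands in java.hprof.txt in the working directory when the 
VM exits.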

One final note.  Sometimes hprof gets confused and reports entries for 
stack trace 0.  I haven't found the answer to this.  Sometimes using 
-verbose helps.

Kevin


Christopher Brooks wrote:

>Hi Kevin,
>
>I modified MoMLParser:
>
> Renamed _parser to _xmlParser so as to differentiate it from the "_parser"
> attribute.
> parse(URL, Reader) now sets _xmlParser at the start to a new XmlParser
> and then sets _xmlParser to null in a finally block.
> Moved calls to _xmlParser.getLineNumber() and getColumnNumber() to private
> methods that return -1 if _xmlParser is null.
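If I follow, the pattern is roughly the sketch below.  Only the names 
mentioned above come from the description; everything else, including 
the XmlParser import, is my assumption:

  import java.io.Reader;
  import java.net.URL;
  // Assuming the Aelfred parser that Ptolemy ships with:
  import com.microstar.xml.XmlParser;

  public class MoMLParser {
      // Renamed from _parser so it is not confused with the
      // "_parser" attribute.
      private XmlParser _xmlParser;

      public void parse(URL base, Reader reader) throws Exception {
          _xmlParser = new XmlParser();
          try {
              // ... the actual parsing, using _xmlParser ...
          } finally {
              // Drop the reference in a finally block so the
              // XmlParser can be garbage collected even if
              // parsing throws.
              _xmlParser = null;
          }
      }

      // Guarded accessors return -1 when no parse is in progress.
      private int _getLineNumber() {
          return (_xmlParser == null) ? -1 : _xmlParser.getLineNumber();
      }

      private int _getColumnNumber() {
          return (_xmlParser == null) ? -1 : _xmlParser.getColumnNumber();
      }
  }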
>
>As a poor test of start up time, I created a simple shell script
>called "doit" that invoked Kepler by calling java and passing the
>proper classpath and command line args and then ran "time sh doit" and
>then clicked the close button as quickly as I could.
>
>I ran the script once and tossed that first run.
>I then got the following times in seconds: 22.292 21.875 21.975
>The average is 22.047 seconds
>
>I then recompiled with my MoMLParser work, tossed the first run and
>got:                                       21.132 20.975 21.317
>The average is 21.141 seconds
>
>So, it would appear that the change does not slow things down during
>start up.  I can see how it would slow things down if something
>triggers lots of MoMLChangeRequests where the body is written in xml.
>
>BTW: Vergil starts up in: 4.772 4.555 4.617 
>The average is 4.648 seconds.
>
>
>To measure the memory usage, I started Kepler, and in a Graph Editor,
>View -> JVM Properties shows what we are using.
>
>Before the MoMLParser change:
>Memory: 119228K Free: 37192K (31%) Max: 520256K (23%)
>After I hit the Garbage collection button:
>Memory: 141764K Free: 62659K (44%) Max: 520256K (27%)
>
>After the MoMLParser change:
>Memory: 72156K Free: 14493K (20%) Max: 520256K (14%)
>If I do a garbage collection, I get:
>Memory: 94556K Free: 41816K (44%) Max: 520256K (18%)
>
>I'm not sure why the amount of memory goes up after gc, this could be
>something to look at.
>
>The MoMLParser change does appear to reduce the amount of memory
>needed at start up.
>


