[kepler-dev] Memory leak.

Christopher Brooks cxh at eecs.berkeley.edu
Thu Dec 22 13:11:22 PST 2005


Hi Richard,

This is an interesting problem.

I hacked up a small test case that I checked in as
moml/test/MoMLParserLeak.java

package ptolemy.moml.test;

import ptolemy.kernel.util.NamedObj;
import ptolemy.moml.MoMLParser;

/** 
 Leak memory in MoMLParser by throwing an Exception.
 <p> Under Java 1.4, run this with:
 <pre>
java -Xrunhprof:depth=15 -classpath "$PTII;." ptolemy.moml.test.MoMLParserLeak
 </pre>
 and then look in java.hprof.txt.

 @author Christopher Brooks
 @version $Id: MoMLParserLeak.java,v 1.1 2005/12/22 19:40:33 cxh Exp $
 @since Ptolemy II 5.2
 @Pt.ProposedRating Red (cxh)
 @Pt.AcceptedRating Red (cxh)
 */
public class MoMLParserLeak {
    public static void main(String[] args) throws Exception {
        MoMLParser parser = new MoMLParser();
        try {
            NamedObj toplevel = 
                parser.parse("<?xml version=\"1.0\" standalone=\"no\"?>\n"
                        + "<!DOCTYPE entity PUBLIC \"-//UC Berkeley//DTD MoML 1//EN\"\n"
                        + "\"http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd\">\n"
                        + "<entity name=\"top\" class=\"ptolemy.kernel.CompositeEntity\">\n"
                        + "<entity name=\"myRamp\" class=\"ptolemy.actor.lib.Ramp\"/>\n"
                        + "<entity name=\"notaclass\" class=\"Not.A.Class\"/>\n"
                        + "</entity>\n");
        } finally {
            System.gc();
            System.out.println("Sleeping for 2 seconds for any possible gc.");
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                // Ignore: the sleep is only to give the gc time to run.
            }
            
        }
    }
}


If I run it under Java 1.4 with -Xrunhprof, it appears to leak memory:
references to ptolemy.actor.lib.Ramp are left around even though the
parse failed.

I can think of two ways to clean this up, both by catching the exception
in MoMLParser.parse(URL, Reader), cleaning up, and rethrowing:

1) Use the undo mechanism.  This seems really tricky.

2) Traverse the objects that have been instantiated and call
setContainer(null) on them.  I tried temporarily hacking this in
by misusing the _topObjectsCreated list and then calling
setContainer(null) on each of its elements.
Unfortunately, this did not quite work: I still appear to have
references to Ramp.
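Approach 2 can be sketched generically.  The Registry, Node, and parse
names below are hypothetical stand-ins (not the real Workspace,
NamedObj, or MoMLParser API); the point is only the pattern: record
each top-level object as it is created, and on failure detach every one
before rethrowing, so the container holds no references to the partial
result.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the cleanup-and-rethrow idea.  All names here are
 *  hypothetical stand-ins for the Ptolemy classes. */
public class CleanupSketch {
    /** Holds strong references to its children, mimicking how a
     *  Workspace/CompositeEntity retains the entities parsed so far. */
    static class Registry {
        final List<Node> children = new ArrayList<Node>();
    }

    static class Node {
        Registry container;
        final String name;

        Node(Registry container, String name) {
            this.container = container;
            this.name = name;
            container.children.add(this);
        }

        /** Mimic NamedObj.setContainer(null): drop the reference
         *  the container holds to this node. */
        void setContainer(Registry newContainer) {
            if (newContainer == null && container != null) {
                container.children.remove(this);
                container = null;
            }
        }
    }

    /** Create one node per name, failing at index failAt; on failure,
     *  detach everything created so far and rethrow. */
    static void parse(Registry registry, String[] names, int failAt)
            throws Exception {
        List<Node> created = new ArrayList<Node>();
        try {
            for (int i = 0; i < names.length; i++) {
                if (i == failAt) {
                    throw new Exception("Not.A.Class not found");
                }
                created.add(new Node(registry, names[i]));
            }
        } catch (Exception ex) {
            // The analog of calling setContainer(null) on each
            // element of the _topObjectsCreated list.
            for (Node node : created) {
                node.setContainer(null);
            }
            throw ex;
        }
    }

    public static void main(String[] args) {
        Registry registry = new Registry();
        try {
            parse(registry, new String[] { "myRamp", "notaclass" }, 1);
        } catch (Exception expected) {
            // The failed parse must not leak "myRamp".
        }
        System.out.println("children after failed parse: "
                + registry.children.size());
    }
}
```

After the failed parse the registry should be empty; if "myRamp" were
still in registry.children, that would be the analog of the Ramp leak.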

There are probably other ways as well.
We did resolve/work around the XmlParser leak by modifying
MoMLParser.parse(URL, Reader) so that it instantiates the XmlParser and
then deletes it.  I had to make some other modifications as well;
the fix is in the CVS repository, see
http://chess.eecs.berkeley.edu/ptexternal.

Two tools for tracking down leaks are the Java 1.4 -Xrunhprof
option and Eclipse's TPTP memory tool (http://eclipse.org/tptp/).
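A cheaper check that needs no profiler at all is a WeakReference probe:
hold only a weak reference to the suspect object, request a gc, and see
whether the referent disappears.  The WeakLeakCheck class below is my
own sketch, not part of Ptolemy; note that System.gc() is only a hint,
so this is a heuristic, not a guarantee.

```java
import java.lang.ref.WeakReference;

/** Sketch of a profiler-free leak probe: if an object reachable only
 *  through a WeakReference survives several gc attempts, something is
 *  still holding a strong reference to it. */
public class WeakLeakCheck {
    /** Returns true if the referent was collected within a few gcs. */
    static boolean collected(WeakReference<?> ref) {
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                // Ignore: the sleep only gives the gc time to run.
            }
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        byte[] suspect = new byte[1024];
        WeakReference<Object> ref = new WeakReference<Object>(suspect);

        // Still strongly referenced here, so this should report false.
        System.out.println("collected while referenced: " + collected(ref));
        System.out.println("suspect length: " + suspect.length);

        // Drop the strong reference; now the referent can go away.
        suspect = null;
        System.out.println("collected after release: " + collected(ref));
    }
}
```

In the MoMLParser case, one would take a WeakReference to the Ramp
entity after the failed parse: if collected() stays false, the parser
(or the workspace) is still retaining it.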

Kevin Ruland wrote a pretty good description of -Xrunhprof:

> I check memory leaks by using -Xrunhprof.  -Xrunhprof:help (note the 
> colon) lists the arguments.  My preferred combination is 
> -Xrunhprof:cutoff=0.005,depth=15.  There are a number of resources on 
> the web describing runhprof, although it has now been superseded by 
> jvmapi (or something), which gives greater control.  The brief rundown: 
> the report generated in the file java.hprof.txt (which for me was 
> between 100M-250M, depending on hprof options) is divided into three parts:
> 
> Stack traces -
> 
> Allocation/object information -
> 
> Active objects -
> 
> I only know how to use the first and third sections...  There might be 
> an option to hprof to omit the second section, I don't really know.
> 
> Every stack which allocates memory with an active reference at program 
> exit is represented.  These stacks are called "traces".
> 
> The active object section tells you the collective size of the objects 
> "leaked" at each trace.  They are in descending order by total size.
> 
> I tend to follow this procedure.  Find the largest culprit in the active 
> object section.  Identify its trace number, which is in the second-to-last 
> column (the columns are not formatted very well).  Suppose it's 
> 10041.  I then search backwards through the file for the string 
> "TRACE 10041:" (note the all caps and colon).  This gives you the 
> stack trace that allocated all that memory.
> 
> The arguments I give it:
> 
> cutoff=0.005 means don't report on objects whose total is less than 0.5% 
> of the total memory allocated.
> 
> depth=15 means generate stack traces 15 frames deep.  The default is 4, 
> which is almost never enough.
> 
> One final note.  Sometimes hprof gets confused and reports entries for 
> stack trace 0.  I haven't found the answer to this.  Sometimes using 
> -verbose helps.
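On the command line, Kevin's procedure might look roughly like this
(a sketch only: -Xrunhprof is a Java 1.4-era flag that has been removed
from modern JDKs, and the trace number 10041 is just the example from
the quote above):

```shell
# Run with allocation-site profiling, using Kevin's preferred
# cutoff/depth settings (Unix classpath separator shown).
java -Xrunhprof:cutoff=0.005,depth=15 -classpath "$PTII:." \
    ptolemy.moml.test.MoMLParserLeak

# Find a big entry in the active-object section of the report, note
# its trace number, then pull up the allocating stack trace:
grep -n 'TRACE 10041:' java.hprof.txt
```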


I'll see if I can come up with a workable solution.

_Christopher


--------

    Hi,
    
    I have embedded the Ptolemy kernel in an OSGi service. This service
    acts as a container for deployed model graphs and can dynamically
    associate actor behavior with service behavior. In our situation,
    multiple parsed models (around 30) can coexist.
    
    Before a new model is parsed, I call reset() on MoMLParser. This avoids
    incremental model parsing, which would otherwise create dependencies
    across separately deployed models. Maybe this is related to the memory
    leak mentioned by Kevin.
    
    A few weeks ago I spent some time profiling our system. Regarding the
    current subject, I discovered a memory leak in one of the alternative
    paths of the MoMLParser:
    
    During parsing, the MoMLParser can throw an XMLException if the file
    content is invalid, and a ClassNotFoundException if some classes cannot
    be resolved. If one of these exceptions is thrown, the entities contained
    in the partially constructed graph are still added to the workspace.
    
    In my opinion the MoMLParser should in that case undo the performed
    workspace actions.
    
    
    Best wishes,
    
    Richard van der Laan, Luminis
    TheWeb: http://www.luminis.nl
    LOSS  : https://opensource.luminis.net
    

