[kepler-dev] [Bug 3115] - Need to check Kepler for memory leaks
bugzilla-daemon at ecoinformatics.org
Thu Jan 31 14:10:56 PST 2008
http://bugzilla.ecoinformatics.org/show_bug.cgi?id=3115
------- Comment #1 from cxh at eecs.berkeley.edu 2008-01-31 14:10 -------
This thread concerning memory leaks might be of use:
Jackie wrote:
I narrowed down the memory leak a bit and found out one main cause. It looks
like it's coming from executing MoML change requests. For example, issuing
multiple calls like NamedObj.requestChange(new MoMLChangeRequest(....))
would quickly overflow the memory.
--jackie
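A minimal sketch of the pattern Jackie describes, assuming a hypothetical top-level
TypedCompositeActor; the class name ChangeRequestLeakDemo and the loop bounds are
placeholders, not the actual Kepler test code:

    import ptolemy.actor.TypedCompositeActor;
    import ptolemy.kernel.util.NamedObj;
    import ptolemy.moml.MoMLChangeRequest;

    public class ChangeRequestLeakDemo {
        public static void main(String[] args) throws Exception {
            NamedObj top = new TypedCompositeActor();
            for (int i = 0; i < 100000; i++) {
                // Add a parameter and then delete it again, both via MoML
                // change requests, so the model itself should not grow.
                top.requestChange(new MoMLChangeRequest(
                        ChangeRequestLeakDemo.class, top,
                        "<property name=\"p\" class=\"ptolemy.data.expr.Parameter\""
                                + " value=\"1\"/>"));
                top.requestChange(new MoMLChangeRequest(
                        ChangeRequestLeakDemo.class, top,
                        "<deleteProperty name=\"p\"/>"));
                if (i % 10000 == 0) {
                    Runtime rt = Runtime.getRuntime();
                    // If this number keeps climbing, something is retaining
                    // the change requests or the objects they create.
                    System.out.println(i + ": "
                            + (rt.totalMemory() - rt.freeMemory())
                            + " bytes used");
                }
            }
        }
    }

If the used heap keeps climbing even though each request undoes the previous one, the
change requests themselves (or listeners holding on to them) are a likely suspect.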
----- Original Message -----
From: "Christopher Brooks" <cxh at eecs.berkeley.edu>
To: "Jackie Man-Kit Leung" <jleung at berkeley.edu>
Cc: <ptresearch at chess.eecs.berkeley.edu>
Sent: Friday, November 30, 2007 2:51 PM
Subject: Re: Increasing memory usage after running multiple rounds of test
> Hi Jackie,
> Welcome to the world of memory leaks.
> See $PTII/doc/coding/performance.htm
>
> Calling MoMLParser.reset() and MoMLParser.purgeAllModelRecords()
> might not free up all memory. It is easier to look at
> what memory is actually leaking and fix it than to try
> to figure it out without leakage data.
>
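A minimal sketch of what calling these between tests might look like; the MoML string
and the class name PurgeBetweenTests are placeholders, and the real regression tests
drive this from the Tcl test environment rather than a Java main():

    import ptolemy.kernel.util.NamedObj;
    import ptolemy.moml.MoMLParser;

    public class PurgeBetweenTests {
        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 1000; i++) {
                MoMLParser parser = new MoMLParser();
                NamedObj model = parser.parse(
                        "<entity name=\"top\""
                        + " class=\"ptolemy.actor.TypedCompositeActor\"/>");
                // ... exercise the model here, as a test would ...

                // Between tests: drop the cached parsed models and reset the
                // parser state. As noted above, this alone may not release
                // everything the tests allocated.
                MoMLParser.purgeAllModelRecords();
                parser.reset();
            }
        }
    }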
> Definitely try to work within a non-GUI environment at first.
> Using the test environment is the way to go.
>
> There are several products available.
>
> Check out HP's HPjmeter, which is free.
>
> See also JProfiler and JProbe.
> We can buy copies of these products, though it might take a while
> to do so.
>
> Can you look over $PTII/doc/coding/performance.htm
> and update it as necessary?
>
> _Christopher
>
> --------
> Christopher,
>
> I have been chasing a memory leak problem that causes the Tcl script to
> crash during regression testing. The JVM basically throws an
> OutOfMemoryError and stops functioning for subsequent tests. This
> happens even if I call MoMLParser.purgeAllModelRecords() and
> MoMLParser.reset() in between the tests. Some memory is not being
> released, but I am not sure what. I put in some print statements and
> found that the amount of memory leaked is highly non-uniform (i.e., some
> model tests have zero leak). Plus, I found that the order in which the
> models are run changes the amount leaked for a particular model test as
> well.
>
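A sketch of the kind of print statement mentioned above, using a hypothetical helper
class HeapReport; it forces a garbage collection and reports used heap so per-model
deltas can be compared between tests:

    public class HeapReport {
        /** Return an estimate of the heap currently in use, in bytes. */
        public static long usedHeap() {
            Runtime runtime = Runtime.getRuntime();
            // Best effort only: the JVM may ignore explicit GC requests.
            runtime.gc();
            return runtime.totalMemory() - runtime.freeMemory();
        }

        /** Print the used heap with a label, e.g. the model test name. */
        public static void report(String label) {
            System.out.println(label + ": " + usedHeap() / 1024 + " kB in use");
        }
    }

Calling HeapReport.report(modelName) after each test shows which models leave the heap
larger than they found it, which is the non-uniform behavior described above.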
> I think there are two possibilities: one is that Ptolemy II is
> caching the actors, which is unlikely, because I think calling
> MoMLParser.purgeAllModelRecords() and MoMLParser.reset() would have
> solved the problem if that were the case. The other possibility I thought of
> is that some of the tokens may be cached by the software so that it can reuse
> them in future computations for efficiency. However, I cannot find
> any code doing that. Any suggestions?
>
> --jackie
>