[kepler-dev] recommendations for automated actor tests

Timothy McPhillips tmcphillips at mac.com
Sun Mar 12 19:49:27 PST 2006


Hi Christopher,

Great comments!

Although I tried to make my suggestions as unambiguous as possible, I  
do not consider this a comprehensive test plan.  It's not even close,  
as your comments illustrate.  Rather, the plan is meant to be highly  
focused on the particular objective we set during the last hour of   
the February Kepler developers' meeting (summarized in the  
"motivation" section of the plan).

I suggested the term "actor tests" to distinguish them from other  
types of "test workflows", demo workflows, etc.  I, too, generally  
focus primarily on unit tests (and I earnestly hope that in the  
future we will have comprehensive unit tests, based on JUnit for  
example, for every class).  And I agree that these "actor tests" are  
not unit tests.  I tend to think of actors as workflow "components",  
so in my mind actor tests fall into that murky "component test"  
category that sometimes pops up between unit and integration tests.   
But we should junk the "actor test" terminology if it is confusing.  
It is true, too, that the approach I outlined uses Kepler as a  
framework to test Kepler actors.  However, my hope is that each of  
these tests will be designed to test a single actor in particular  
(though other parts of the workflow, or the system as a whole, may of  
course fail as well).  We'll have to see how practical this is.
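To make this concrete, here is roughly the shape of test I have in mind, sketched in MoML.  (The actor class names are the standard Ptolemy II ones as I recall them; the file name, values, and topology are made-up illustrations, not a test anyone has committed.)

```xml
<?xml version="1.0"?>
<!DOCTYPE entity PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<!-- ArrayLength_SDF.moml: a hypothetical test focused on one actor. -->
<entity name="ArrayLength_SDF" class="ptolemy.actor.TypedCompositeActor">
  <property name="SDF Director" class="ptolemy.domains.sdf.kernel.SDFDirector">
    <property name="iterations" value="1"/>
  </property>
  <!-- Prepare the input: a constant three-element array. -->
  <entity name="Const" class="ptolemy.actor.lib.Const">
    <property name="value" value="{1, 2, 3}"/>
  </entity>
  <!-- The single actor under test. -->
  <entity name="ArrayLength" class="ptolemy.actor.lib.array.ArrayLength"/>
  <!-- Validate the output non-graphically: the workflow fails if the
       actor does not produce the expected value 3. -->
  <entity name="Test" class="ptolemy.actor.lib.Test">
    <property name="correctValues" value="{3}"/>
  </entity>
  <relation name="r1" class="ptolemy.actor.TypedIORelation"/>
  <relation name="r2" class="ptolemy.actor.TypedIORelation"/>
  <link port="Const.output" relation="r1"/>
  <link port="ArrayLength.input" relation="r1"/>
  <link port="ArrayLength.output" relation="r2"/>
  <link port="Test.input" relation="r2"/>
</entity>
```

The point of the shape is that everything other than ArrayLength exists only to feed it input or check its output, so a failure points straight at the actor under test.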

It is true that code coverage analysis is not part of the plan.  And  
I agree we need to start doing code coverage analysis as soon as  
possible.  However, I think that like the unit tests, this falls  
outside the scope of what we agreed to do before the 1.0 release.

Finally, it is true that my initial recommendations preclude testing  
of graphical actors that cannot provide equivalent non-graphical  
output.  We certainly can include smoke-tests for such actors, as you  
suggest.  However, I think we should flag these tests as being  
somewhat deficient.  I'd particularly like to avoid workflows that  
simply "test" an actor by sending its output to a display actor.  In  
other words, my hope is that smoke tests would be used exclusively  
for actors that cannot be tested "automatically" any other way.  It  
will be interesting to see how many Kepler actors (particularly those  
not provided by Ptolemy) can, in fact, be tested effectively using  
workflow-based tests at all.

In the longer term, I'd very much like to participate in addressing  
all of the issues you brought up and more.  Let's see what the others  
have to say about the short term objective (i.e., what we're going to  
do this month).

Cheers,

Tim


On Mar 12, 2006, at 11:26 AM, Christopher Brooks wrote:

> Hi Tim,
> Your suggestions look very comprehensive.
>
> A note about terminology:
>
> The term "Actor tests" implies unit tests (actors being an atomic unit
> of execution), which test workflows are not.  A test system that
> includes the code under test is usually called a system test or
> integration test.  System tests will catch different bugs than unit
> tests.  Personally, I prefer to emphasize unit tests over system tests
> because it is easier to comprehensively test error conditions and
> boundary conditions.  Usually one writes unit tests using something
> like JUnit.  I do think that both unit tests and system tests are
> necessary, I just focus on unit tests over system tests. I think
> Edward prefers test workflows over unit tests.  I'm fine with "actor
> tests" though.
>
> One aspect that is missing is that I don't see any code coverage in
> the plan.  Locally we use an obsolete code coverage product from Sun,
> I'm sure there are other products out there.  Code coverage will
> tell us which actors have no tests and which methods and blocks are
> untested.  100% code coverage does not mean that a program is tested,
> but without some form of code coverage it is difficult to have any
> idea of how well tested a piece of code is.  Perhaps the Kepler
> nightly build has code coverage in it already?
>
> Another aspect that needs to be addressed is handling graphical tests
> and actors.  Currently, the Ptolemy II tests are non-graphical.
> There are two reasons:
>  1) We want to be able to ship non-graphical systems.
>  2) Graphical testing that includes regression tests is difficult.
>     It is easy to smoke test a system that pops up a window, it is
>     hard to say that the window that was popped up is correct.
>
> How do you intend to test graphical systems?  Are tests allowed to
> pop up windows?
>
> _Christopher
>
> --------
>
>     All,
>
>     I volunteered to oversee some aspects of actor testing in  
> preparation
>     for the 1.0 release.  In particular, I will be inspecting the  
> actor
>     tests listed in lib/test-workflows.lst to verify that they test  
> the
>     actors we include in the release.  I indicate my  
> recommendations for
>     writing actor tests below.
>
>     Note that while we have sometimes used the term "test  
> workflows," we
>     are actually trying to test particular actors here, not workflows.
>     (We are using workflows to test actors.)  For this reason, I  
> refer to
>     workflows meant to test actors as "actor tests".
>
>     Please let me know if you have any comments, corrections, or
>     alternative suggestions.  I will post an updated version of the
>     material below to the Kepler wiki once everyone has had a  
> chance to
>     comment.
>
>     Thanks!
>
>     Tim
>
>
>
>     *** Motivation ***
>
>     Why include automated tests for Kepler actors?  Two reasons  
> have been
>     suggested:
>
>     1.  Existence of a working test for an actor will be the
>     criterion for including an actor in the 1.0 release.
>     2.  We want tests for all actors included in the 1.0 release.
>
>     Comments:  The first motivation implies that it is practical to
>     write automated tests for all actors in 1.0, while the second
>     motivation might imply that we will deliver well-tested actors
>     exclusively.  While neither of these implications seems entirely
>     probable, we'll do the best we can.  Note also that there are
>     additional good reasons to have actor tests, but we will not
>     focus on these here.
>
>
>
>     *** Assumptions ***
>
>     I have made several assumptions about our actor tests and testing
>     framework.  They are the basis for the recommendations that  
> follow.
>
>     1.  We want fully automated tests only.   In particular, the  
> nightly
>     build must run each actor test, and the nightly build report must
>     accurately report test success or failure on an actor-by-actor  
> basis.
>     2.  We assume that all actors provided with Ptolemy II are well-
>     tested.  Consequently, we do not require automated tests for  
> Ptolemy
>     II actors in Kepler. (Contributing tests for these actors is
>     encouraged, however!)
>     3.  Actor tests should be runnable not just on the nightly build
>     system, but also on developers' systems.
>     4.  Collectively, the actor tests should exercise the key  
> function(s)
>     provided by each actor.
>     5.  It should be clear what actor(s) each automated test is  
> intended
>     to test.
>     6.  It should be relatively straightforward to determine from a  
> test
>     failure which actor failed.
>     7.  Actor tests are not expected to test Kepler code outside of  
> actors.
>     8.  Successfully testing an actor with a particular director  
> does not
>     imply that the actor works with any other director.
>     9.  We want to compose, document, and maintain actor tests  
> using the
>     Kepler GUI.
>
>     Comments:  The first assumption excludes the possibility of fully
>     testing actors that generate graphical output or depend on user
>     interaction.  We will need a different criterion for including  
> such
>     actors in Kepler 1.0.  The second assumption also requires a  
> distinct
>     mechanism for deciding what actors to include in 1.0.
>
>
>
>     *** Recommendations ***
>
>     1.  Focus each actor test on testing a single actor.  There may be
>     more than one test for a particular actor.  The name of the test
>     should indicate which actor is being tested as well as the  
> specific
>     nature of the test.  I suggest a name such as  
> "ArrayLength_SDF.moml"
>     to imply that only one test under SDF is required, and a name like
>     "ArrayAppend_SDF_two_channels.moml" to indicate one of several
>     possible tests for an actor.
>
>     2.  Actor tests may be included anywhere in the kepler repository.
>     The relative path to each actor test (e.g.,
>     workflows/test/test-ecogrid-eml-gce-data.xml) should be included
>     in the file lib/test-workflows.lst.  This is the file used by the
>     nightly build script to determine what actor tests to run.
>     Maintain an alphabetical order of the test (i.e., actor) names
>     (not the relative paths).
>
>     3.  Before including a test in the nightly build, test it locally
>     using the ptexecute ant target in build.xml, i.e., by issuing a
>     command such as:
>
>                ant ptexecute -Dworkflow=workflows/eco/Elk_Wolf.xml
>
>     4.  Any local data files required for the test should be  
> included in
>     the kepler repository.  Within the actor test, specify the path to
>     any such files relative to the kepler project directory.
>
>     5.  Where possible, use the Test, NonStrictTest, and TypeTest  
> actors
>     to validate actor output (see documentation for these actors).
>
>     6.  Use the minimum number of actors and connections required  
> to test
>     a particular function of an actor.
>
>     7.  Document each actor test internally (e.g., using a  
> TextAttribute
>     a.k.a. "Annotation").  Indicate what function of the actor is  
> being
>     tested, how its inputs are prepared, how its outputs are  
> validated,
>     etc.  Strive to make it easy to determine what has gone wrong  
> when a
>     test fails.
>
>     8.  Document what external applications are required to  
> successfully
>     run the test, and how to acquire installers for these  
> applications.
>     Make sure that required applications are installed on the nightly
>     build system before committing the test to the repository.
>
>     9.  Include your name and e-mail address in the documentation for
>     each test.  I will be examining the actor tests periodically and
>     contacting authors of tests that I cannot understand.
>
>
>
>     *** Additional open questions  to consider ***
>
>     1.  How many Kepler actors can be adequately tested using this  
> approach?
>     2.  What additional actors or tools might make it easier to test
>     Kepler actors?
>
>
>
>     _______________________________________________
>     Kepler-dev mailing list
>     Kepler-dev at ecoinformatics.org
>     http://mercury.nceas.ucsb.edu/ecoinformatics/mailman/listinfo/kepler-dev
> --------



