[kepler-dev] Distributed SDF & Defence

Chad Berkley berkley at nceas.ucsb.edu
Mon Dec 17 14:15:38 PST 2007


Hi Christopher and Daniel,

We aren't using the distributedSDFDirector at all.  We have our own 
scheme whereby you place any part of a workflow that you want 
distributed into a DistributedCompositeActor (DCA), then choose which 
hosts you want that DCA to be distributed to.  Each host that you can 
distribute execution to runs a slave controller instance which is 
basically an RMI listener that can run kepler on the slave host.  You 
can read more about the work we've done here:
http://www.kepler-project.org/Wiki.jsp?page=WorkingDistributedFeatures
and here:
http://www.kepler-project.org/Wiki.jsp?page=DistributedKepler
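
To make the scheme above concrete, here is a minimal sketch of the slave-controller idea: each slave host exposes a remote interface the master can call to execute a workflow fragment over RMI.  The names (SlaveController, runWorkflow) and the echoed status string are illustrative assumptions, not Kepler's actual API, and the real slave would of course launch a headless Kepler run rather than echo a string.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class SlaveControllerSketch {

    // Remote interface the master uses to hand a workflow fragment
    // (e.g. the MoML for a DistributedCompositeActor) to a slave host.
    // Hypothetical names; not the actual Kepler interface.
    interface SlaveController extends Remote {
        String runWorkflow(String momlFragment) throws RemoteException;
    }

    // Minimal slave-side implementation.  In the real system this would
    // run the fragment in a local Kepler instance; here it just reports
    // what it received so the wiring is visible.
    static class SlaveControllerImpl implements SlaveController {
        public String runWorkflow(String momlFragment) {
            return "ran fragment of length " + momlFragment.length();
        }
    }

    public static void main(String[] args) throws Exception {
        // A real deployment would export the object and bind it in an
        // RMI registry on each slave host, roughly:
        //   SlaveController stub = (SlaveController)
        //       UnicastRemoteObject.exportObject(new SlaveControllerImpl(), 0);
        //   LocateRegistry.createRegistry(1099).rebind("slave", stub);
        // and the master would look it up with Naming.lookup(...).
        // For illustration we just invoke the implementation in-process.
        SlaveController slave = new SlaveControllerImpl();
        System.out.println(slave.runWorkflow("<entity name=\"dca\"/>"));
    }
}
```

The master then does a registry lookup per selected host and invokes runWorkflow on each stub, which is essentially what "an RMI listener that can run kepler on the slave host" amounts to.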

It's currently still what I would consider alpha-level software.  It 
works for me and a few other people, but it's not really ready to go 
into the wild yet.  Setup is pretty straightforward, though, and I've 
had it running on our ROCKS cluster at NCEAS.  Let me know if you have 
any questions or want to get it running on your own machines.

chad

Christopher Brooks wrote:
> Hi Chad,
> 
> Daniel might have already contacted you about this, but can you say
> a few words about Kepler's use of the distributed sdf work?
> 
> _Christopher
> 
> ------- Forwarded Message
> 
> 
> From: Daniel <kapokasa at es.aau.dk>
> To: ptresearch <ptresearch at chess.eecs.berkeley.edu>
> Subject: [Ptolemy] Distributed SDF & Defence
> Date: Sun, 16 Dec 2007 14:23:34 +0100
> 
> Dear All,
> 
> I would like to announce that I will be defending my Ph.D. thesis 
> entitled "Automated Distributed Simulation in Ptolemy II" on the 6th of 
> February 2008. I would like to know if you are aware of any users or 
> work in progress around the Distributed SDF work to mention at the defence.
> 
> Greetings from my new location, Madrid, Spain.
> Daniel
> _______________________________________________
> Ptolemy maillist  -  Ptolemy at chess.eecs.berkeley.edu
> http://chess.eecs.berkeley.edu/ptolemy/listinfo/ptolemy
> 
> ------- End of Forwarded Message
> 
> 