[kepler-dev] introduction & distributed ptolemy

Tobin Fricke tobin at splorg.org
Fri Jun 11 17:48:07 PDT 2004


I've just started working at UCSD (SDSC/IGPP) on a project to integrate
Ptolemy/Kepler and our existing network of geophysical data sources. I
thought I'd introduce myself and also ask a few questions. I'm just
getting started, so references to prior work would be very much
appreciated.

Our data sources provide streams of waveform data from various sensors,
which are staged on servers called ORBs.  The simplest way of integrating
these into Ptolemy is to provide an actor which connects to the ORB and
provides the waveform data to Ptolemy as a sequence of tokens representing
samples.  Then data analysis workflows can be developed using the standard
Ptolemy/Kepler components.
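
To make that concrete, here is a rough sketch of what such a source actor
might look like using Ptolemy II's Java actor API.  The class name and the
placeholder ORB-read method are just illustrative; a real actor would open
a connection to the ORB (in initialize(), say) and pull samples from our
client library instead of returning a dummy value.

    import ptolemy.actor.TypedAtomicActor;
    import ptolemy.actor.TypedIOPort;
    import ptolemy.data.DoubleToken;
    import ptolemy.data.type.BaseType;
    import ptolemy.kernel.CompositeEntity;
    import ptolemy.kernel.util.IllegalActionException;
    import ptolemy.kernel.util.NameDuplicationException;

    /** Sketch of a source actor emitting ORB waveform samples as tokens. */
    public class OrbWaveformSource extends TypedAtomicActor {

        /** Output port carrying one double token per waveform sample. */
        public TypedIOPort output;

        public OrbWaveformSource(CompositeEntity container, String name)
                throws IllegalActionException, NameDuplicationException {
            super(container, name);
            output = new TypedIOPort(this, "output", false, true);
            output.setTypeEquals(BaseType.DOUBLE);
        }

        /** Emit the next waveform sample on each firing. */
        public void fire() throws IllegalActionException {
            super.fire();
            // In a real actor this value would come from the ORB client
            // connection; here a dummy value stands in.
            double sample = _nextSampleFromOrb();
            output.send(0, new DoubleToken(sample));
        }

        // Placeholder for the ORB read; the actual client API is not shown.
        private double _nextSampleFromOrb() {
            return 0.0;
        }
    }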

It might be desirable to distribute this workflow across the network, to
utilize CPU cycles and link capacity most effectively.  There are
references to such schemes sprinkled about in the Ptolemy documentation,
but I am curious what has been accomplished already and what would be an
advisable avenue to pursue.  "E-Ptolemy" seems very much along the lines
of what I am interested in, but so far I have only found an abstract
describing that work.

A basic question I have is: is there a defined network transport for
Ptolemy relations?  I expect that this question isn't really well-formed,
as I still have some reading to do on how relations actually work.
Nonetheless, the question remains: if we have different instances of
Ptolemy talking to each other across the network, how are the data streams
transmitted?  In our case one option is to use the ORB as the stream
transport, equipping each sub-model with ORB source and ORB sink
components; and perhaps this could be done implicitly to automatically
distribute a model across the network.  But this line of thinking is
strongly tied to the idea of data streams and may not be appropriate for
the more general notion of relations in Ptolemy.
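
For illustration, the sink half of such an ORB source/sink pair might look
roughly like the sketch below; again, the class name and the placeholder
ORB-write method are only stand-ins for whatever the real ORB client calls
would be.  A downstream sub-model would then read the stream back with the
source actor sketched above.

    import ptolemy.actor.TypedAtomicActor;
    import ptolemy.actor.TypedIOPort;
    import ptolemy.data.DoubleToken;
    import ptolemy.data.type.BaseType;
    import ptolemy.kernel.CompositeEntity;
    import ptolemy.kernel.util.IllegalActionException;
    import ptolemy.kernel.util.NameDuplicationException;

    /** Sketch of a sink actor forwarding incoming tokens to an ORB. */
    public class OrbWaveformSink extends TypedAtomicActor {

        /** Input port carrying the samples to be staged on the ORB. */
        public TypedIOPort input;

        public OrbWaveformSink(CompositeEntity container, String name)
                throws IllegalActionException, NameDuplicationException {
            super(container, name);
            input = new TypedIOPort(this, "input", true, false);
            input.setTypeEquals(BaseType.DOUBLE);
        }

        /** Consume one token per firing and push it to the ORB. */
        public void fire() throws IllegalActionException {
            super.fire();
            if (input.hasToken(0)) {
                double sample = ((DoubleToken) input.get(0)).doubleValue();
                // A real actor would write the sample to the ORB here.
                _writeSampleToOrb(sample);
            }
        }

        // Placeholder for the ORB write; the actual client API is not shown.
        private void _writeSampleToOrb(double sample) {
            // no-op in this sketch
        }
    }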

In summary, hello, and any references to distributed Ptolemy schemes would
be appreciated.

Tobin Fricke

San Diego Supercomputer Center
(formerly Berkeley EECS)



