[kepler-users] token matching mixup due to unmatched boolean switches

Colin Enticott Colin.Enticott at infotech.monash.edu.au
Fri Oct 16 03:13:58 PDT 2009


Hi Tirath,

Yes, you'll have to tag the tokens so the environment knows to match
them. You will need to add a new parameter named "createNewTags", of
type ptolemy.data.expr.SingletonParameter, to the source actor (yes,
very messy, but I now have an actor that will do it) and set it to
"true". When that actor sends tokens, each one will be individually
tagged. If you want anything else, such as reordering, you'll need to
update to the latest version of the director. Let me know.
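
In the workflow's MoML this might look roughly like the following (a
sketch: the entity name and actor class are illustrative; only the
parameter name and its SingletonParameter class come from Colin's
message above):

```xml
<entity name="DataSource" class="ptolemy.actor.lib.Ramp">
  <!-- hint to the director that each emitted token should be tagged -->
  <property name="createNewTags"
            class="ptolemy.data.expr.SingletonParameter"
            value="true"/>
</entity>
```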

Also, if some of those actors can run simultaneously in different
threads, let me know. For most actors it's just a matter of adding
another parameter, but I'm still sorting out some issues with Edward
in another thread (mainly about composite actors).

Regards,
Colin

2009/10/16 Tirath Ramdas <tramdas at oci.uzh.ch>:
> Hi Colin,
>
> I just gave it a shot (with the exact same workflow, except that the PN
> Director is replaced by the Nimrod Director) and unfortunately things still
> don't work as I would like. Not only are the results incorrect, they tend to
> alternate between two outcomes every time I run the workflow.
>
> Note that this is with the workflow here:
>>>>
>>>> http://pastebin.com/m2faecfb3
>
>
> Most of the time the results look like this:
> {1, 0}
> {2, 1}
> {3, 2}
> {4, 3}
> {5, 4}
> {7, 5}
> {8, 6}
> {9, 7}
>
> Sometimes the results look like this (PN Director does the same):
> {0, 0}
> {1, 1}
> {2, 2}
> {3, 3}
> {4, 4}
> {6, 5}
> {7, 6}
> {8, 7}
> {9, 8}
>
> Recall the desired result:
>>>>
>>>> {0, 0}
>>>> {1, 1}
>>>> {2, 2}
>>>> {3, 3}
>>>> {4, 4}
>>>> {6, 6} <- desired result
>>>> {7, 7} <-
>>>> {8, 8} <-
>>>> {9, 9} <-
>
>
> The TDA idea is a very nice one, and in theory I can see it should have
> worked, so it must just be an implementation/bug issue? Perhaps the workflow
> has to be modified to suit the TDA Director? I'm not sure how though, it's a
> pretty simple workflow...
>
> One caveat: my version of the TDA director is not very recent; when you
> have time, could you give it a try with your most recent version?
>
> regards,
> -tirath
>
>
>
>
> On 16/10/2009, at 3:18 AM, Colin Enticott wrote:
>
>> Hi Tirath,
>>
>> You could also use the TDA/NimrodK director that does token tagging
>> and matching. But, as you know, it is a separate download.
>>
>> Regards,
>> Colin
>>
>> 2009/10/16 Edward A. Lee <eal at eecs.berkeley.edu>:
>>>
>>> The problem here is that you have an implicit notion of time,
>>> and that notion is absent in PN.
>>>
>>> I suggest using the SR director.
>>>
>>> Edward
>>>
>>>
>>> Tirath Ramdas wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I have a situation which I have been able to address in a "kludgy" way,
>>>> but I wonder if there may be a better way, so I am presenting the
>>>> problem
>>>> and my proposed solution here for your critique and counter-suggestions.
>>>> I
>>>> am sure it's a situation that many others have encountered. This is with
>>>> the
>>>> PN director: I'm not sure how, if at all, the DDF director might help.
>>>> Anyway, here goes...
>>>>
>>>> THE PROBLEM:
>>>>
>>>> Let's say I have a single source of data that has to go through 3
>>>> actors:
>>>> A, B, and C. They are laid out in a fork/join configuration. The
>>>> DataSource
>>>> goes to A and B, and the outputs from those two actors go to C. In dot
>>>> notation, this is what my business logic graph looks like:
>>>>
>>>> DataSource -> TaskA;
>>>> DataSource -> TaskB;
>>>> TaskA -> TaskC;
>>>> TaskB -> TaskC;
>>>>
>>>> This is a trivial workflow. But now, I want to make my TaskA actor more
>>>> sophisticated and detect failures in the computation (I mean business
>>>> logic
>>>> failures - not the kind of thing that can be fixed by re-execution). I
>>>> will
>>>> include in TaskA a boolean switch to detect successful jobs and push
>>>> tokens
>>>> to the output port only when they are good. But the important thing is
>>>> that
>>>> one bad piece of data should not stop the workflow: the rest of the data
>>>> gets processed as usual.
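
A minimal sketch (plain Python, not the Kepler actor API) of the switch
behavior described above; the error test itself is application-specific:

```python
# Sketch of TaskA's boolean switch: each incoming token is tested and
# routed either to the normal output port or to an error port, so one
# bad token stops nothing downstream.
def boolean_switch(tokens, is_error):
    output, errors = [], []
    for t in tokens:
        (errors if is_error(t) else output).append(t)
    return output, errors

good, bad = boolean_switch(range(10), is_error=lambda t: t == 5)
# good == [0, 1, 2, 3, 4, 6, 7, 8, 9], bad == [5]
```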
>>>>
>>>> The problem is, TaskB never knows when one of TaskA's jobs fails, and it
>>>> dutifully pushes all of its tokens through. As a result, TaskC gets
>>>> mixed-up tokens.
>>>>
>>>> A contrived but indicative example: let's say my DataSource is a Ramp
>>>> producing 0 to 9, TaskB is simply an expression that pushes all tokens
>>>> through, and TaskA has an "error condition" where a token value of "5" is
>>>> considered an error and routed to the error port instead of the normal
>>>> output port. TaskC just merges both its inputs into a 2-element array. A
>>>> sample Kepler workflow is here: http://pastebin.com/m2faecfb3
>>>>
>>>> What happens is this:
>>>>
>>>> {0, 0}
>>>> {1, 1}
>>>> {2, 2}
>>>> {3, 3}
>>>> {4, 4}
>>>> {6, 5} <- problem starts here
>>>> {7, 6} <-
>>>> {8, 7} <-
>>>> {9, 8} <-
>>>>
>>>> What I want to see is this:
>>>>
>>>> {0, 0}
>>>> {1, 1}
>>>> {2, 2}
>>>> {3, 3}
>>>> {4, 4}
>>>> {6, 6} <- desired result
>>>> {7, 7} <-
>>>> {8, 8} <-
>>>> {9, 9} <-
>>>>
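The mixup and its tag-based fix can be simulated in plain Python (a
sketch: the real directors work on token streams, and the "tag" here is
simply the ramp value itself):

```python
# TaskA drops the token 5; TaskB passes everything through. TaskC
# pairing tokens by arrival order reproduces the observed mixup.
ramp = list(range(10))
task_a_out = [t for t in ramp if t != 5]   # 5 went to the error port
task_b_out = ramp                          # plain passthrough

mixed = list(zip(task_a_out, task_b_out))  # pairs by position
# mixed == [(0,0), (1,1), (2,2), (3,3), (4,4), (6,5), (7,6), (8,7), (9,8)]

# Matching on a per-token tag instead (what token tagging buys you)
# pairs only tokens with the same origin, skipping 5 on both sides.
matched = [(a, b) for a in task_a_out for b in task_b_out if a == b]
# matched == [(0,0), (1,1), (2,2), (3,3), (4,4), (6,6), (7,7), (8,8), (9,9)]
```
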
>>>> MY KLUDGE:
>>>>
>>>> I hacked around this simply by passing all of TaskB's results through
>>>> "passthrough" ports in TaskA, so that TaskA's condition checking can be
>>>> effectively applied to TaskB's results as well. This is ugly and
>>>> seriously
>>>> detracts from the business process flow that I want to express.
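
The passthrough kludge amounts to something like this (a plain-Python
sketch, not Kepler code):

```python
# TaskB's token rides through TaskA on a passthrough port, so TaskA's
# error check gates both streams and they stay aligned.
def task_a_with_passthrough(pairs, is_error=lambda t: t == 5):
    for a_token, b_token in pairs:
        if not is_error(a_token):
            yield a_token, b_token   # both tokens survive together
        # on error, both tokens are dropped in lockstep

result = list(task_a_with_passthrough(zip(range(10), range(10))))
# result == [(0,0), (1,1), (2,2), (3,3), (4,4), (6,6), (7,7), (8,8), (9,9)]
```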
>>>>
>>>> I don't want to make TaskC responsible for doing TaskB's error checking:
>>>> what if my TaskC is a very generic actor? In practice TaskB's
>>>> error-checking
>>>> is highly application-specific (grep-ing the output of a computational
>>>> chemistry legacy app).
>>>>
>>>> However, I did consider a more generic approach to exception handling
>>>> that would in fact place the burden on TaskC: I considered the
>>>> possibility of mandating that every actor must output a record which
>>>> contains the output data and also a "predication" [1] field. The
>>>> predication field indicates whether the data is valid. Any actor
>>>> receiving tokens only proceeds with the computation if the predication
>>>> fields on all its inputs are set to valid; if even one is invalid, the
>>>> whole lot gets routed to an error bin, but the next lot gets processed
>>>> as though nothing went wrong. I also vaguely recall reading that some
>>>> other workflow engine does something like this. Anyway, I haven't
>>>> proceeded with this yet because it sounds like a non-trivial amount of
>>>> work to modify all the actors I use to adhere to this behavior.
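
The predication-record convention could look roughly like this (a
sketch: the field names "data" and "valid" are made up for
illustration, not from any existing Kepler actor):

```python
# Every actor wraps its output in a record carrying a validity flag.
def predicated(data, valid=True):
    return {"data": data, "valid": valid}

# A consumer fires only when all of its inputs are valid; otherwise the
# whole lot is routed to an error bin and the next lot proceeds normally.
def merge_if_valid(record_a, record_b, error_bin):
    if record_a["valid"] and record_b["valid"]:
        return predicated([record_a["data"], record_b["data"]])
    error_bin.append((record_a, record_b))
    return None

errors = []
ok = merge_if_valid(predicated(6), predicated(6), errors)
bad = merge_if_valid(predicated(5, valid=False), predicated(5), errors)
# ok["data"] == [6, 6]; bad is None and the invalid pair landed in errors
```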
>>>>
>>>> Any other ideas?
>>>>
>>>> regards,
>>>> -tirath
>>>>
>>>> [1] Borrowing the word "predication" from Intel's Itanium branch
>>>> misprediction handling, not sure who they borrowed the term from.
>>>>
>>>> _______________________________________________
>>>> Kepler-users mailing list
>>>> Kepler-users at kepler-project.org
>>>> http://mercury.nceas.ucsb.edu/kepler/mailman/listinfo/kepler-users
>>>
>>>
>>>
>>
>>
>>
>> --
>> Colin
>
>



-- 
Colin


