[seek-kr-sms] UI
Rod Spears
rods at ku.edu
Fri Jun 11 07:04:17 PDT 2004
(This is a general reply to the entire thread that is on seek-kr-sms):
In the end, there are really two very simple questions about what we are
all doing on SEEK:
1) Can we make it work?
a) This raises the question of "how" to make it work.
2) Will anybody use it?
a) This raises the question of whether anybody "can" use it.
Shawn is right when he says we are coming at this from the "bottom-up."
SEEK has been very focused on the mechanics of how to take legacy data
and modeling techniques and create a new environment to "house" them and
better utilize them. In the end, if you can't answer question #1, it
doesn't matter whether you can answer question #2.
But at the same time I have felt that we have been a little too focused
on #1, or at the very least we haven't been spending enough time on
question #2.
Both Nico and Fernando touched on two very important aspects of what we
are talking about. Nico's comment about attacking the problem from
"both" ends (top down and bottom up) seems very appropriate. In fact,
the more we know about the back-end the better we know what "tools" or
functionality we have to develop for the front-end and how best they can
interact.
Fernando's comment touches on the core of what concerns me the most, and
that is the realization of question #2.
His comment: "I also think that the major impediment to an
understanding that requires a paradigm switch is the early idealization
of a graphical user interface." Or, as it is more appropriately known,
"the seduction of the GUI." (Soon to be a Broadway play ;-) ).
We absolutely have to create a tool that scientists can use. So this
means we have to create a tool that "engages" the way they think about
modeling problems. Note that I used the word "engage", meaning the tool
doesn't have to be an exact reflection of their process for creating
models and doing analysis, but it has to be close enough to make them
want to "step up to the plate" and "take a swing for the fence" as it were.
In many ways, too, Fernando's comment touches on the problem I have
always had with Kepler. The UI is completely intertwined with the model
definition and the analysis specification. It has nearly zero
flexibility in how one "views" the "process" of entering the model.
(As a side note, the UI is one of the harder aspects of Kepler to tailor.)
In a perfect world of time and budgets, it would be nice to create a tool
that has a standalone Modeling and Analysis Definition Language, then a
core standalone analysis/simulation engine, and lastly a set of GUI
tools that assist the scientists in creating the models and monitoring
the execution. Notice how the GUI came last? The GUI needs to be born
out of the underlying technology instead of defining it.
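To make that layering a little more concrete, here is a rough sketch in
Python of a standalone definition, a standalone engine, and the GUI
demoted to just another client. Every name in it is made up for
illustration; none of this is actual Kepler or SEEK code.

# Rough sketch of the layering: a pure definition, a pure engine, and
# front-ends (GUI included) as optional clients.  Illustrative names only.
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in a model definition: a named operation plus parameters."""
    operation: str              # e.g. "presample", "fit" (illustrative)
    params: dict = field(default_factory=dict)

@dataclass
class ModelDefinition:
    """The standalone definition language: pure data, no UI, no engine."""
    name: str
    steps: list = field(default_factory=list)

class AnalysisEngine:
    """The standalone engine: runs a definition, knows nothing of any GUI."""
    def __init__(self, operations):
        self.operations = operations    # maps operation name -> callable

    def run(self, model, data):
        for step in model.steps:
            data = self.operations[step.operation](data, **step.params)
        return data

# A GUI, a command line, or a script is then just another client that
# builds ModelDefinitions and hands them to an engine.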
I am a realist, and I understand how much functionality Kepler brings to
the table; it gives us such a head start in the AMS. Maybe we need to start
thinking about a more "conceptual" tool that fits in front of Kepler,
but before that we need to really understand how the average scientist
would approach the SEEK technology. I'll say this as a joke: "but that
pretty much excludes any scientist working on SEEK," but it is true.
Never let the folks creating the technology tell you how the technology
should be used; that's the responsibility of the user.
I know the term "use case" has been thrown around daily as if it were
confetti, but I think the time is approaching where we need to really
focus on developing some "real" end-user use cases. I think a much
bigger effort and emphasis needs to be placed on the "top-down." And
some of the ideas presented in this entire thread are a good start.
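As one small illustration of that kind of top-down thinking, take Deana's
suggestion below of designing a workflow in terms of ontology concepts and
then binding actors to those concepts in different "views." Here is a toy
sketch of the idea in Python; every name is invented for the example, and
none of this is Kepler code.

# A workflow stated as ontology concepts, with concrete "views" that bind
# each concept to an actor.  Invented names, purely for illustration.

conceptual_workflow = ["Sampling", "StatisticalModel"]

# A fragment of the ontology: concept -> actors organized under it.
ontology = {
    "Sampling": ["GarpPresample", "RandomSample"],
    "StatisticalModel": ["GARP", "NeuralNetwork", "GRASP"],
}

def bind_view(workflow, bindings):
    """Build one concrete view by binding every concept to a chosen actor."""
    view = []
    for concept in workflow:
        actor = bindings[concept]
        if actor not in ontology[concept]:
            raise ValueError("%s is not organized under %s" % (actor, concept))
        view.append((concept, actor))
    return view

# Two views of the same conceptual workflow, differing in a single binding:
garp_view = bind_view(conceptual_workflow,
                      {"Sampling": "GarpPresample",
                       "StatisticalModel": "GARP"})
nn_view = bind_view(conceptual_workflow,
                    {"Sampling": "GarpPresample",
                     "StatisticalModel": "NeuralNetwork"})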
Rod
Deana Pennington wrote:
> In thinking about the Kepler UI, it has occurred to me that it would
> really be nice if the ontologies that we construct to organize the
> actors into categories could also be used in a high-level workflow
> design phase. For example, in the niche modeling workflow, GARP,
> neural networks, GRASP and many other algorithms could be used for
> that one step in the workflow. Those algorithms would all be
> organized under some high-level hierarchy ("StatisticalModels").
> Another example is the Pre-sample step, where we are using the GARP
> pre-sample algorithm, but other sampling algorithms could be
> substituted. There should be a high-level "Sampling" concept, under
> which different sampling algorithms would be organized. During the
> design phase, the user could construct a workflow based on these high
> level concepts (Sampling and StatisticalModel), then bind an actor
> (already implemented or using Chad's new actor) in a particular view
> of that workflow. So, a workflow would be designed at a high
> conceptual level, and have multiple views, binding different
> algorithms, and those different views would be logically linked
> through the high-level workflow. The immediate case is that the GARP
> workflow we are designing will need another version for the neural
> network algorithm, and that version will be virtually an exact
> replicate except for that actor. Seems like it would be better to
> have one workflow with different views...
>
> I hope the above is coherent...in reading it, I'm not sure that it is
> :-)
>
> Deana
>
>