From Serguei.Krivov at uvm.edu Tue Jun 1 11:48:07 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Tue, 1 Jun 2004 14:48:07 -0400 Subject: [seek-kr-sms] elsevier's journal of web semantics Message-ID: <000001c44808$fad1d3d0$3ca6c684@BTS2K3000D5635086B> In case you have not seen it already, here is a highly relevant link: http://www.websemanticsjournal.org/ It is possible to publish ontologies , demo software, and also classical papers on grid/semantic web technology. serguei ------------------------------------------------------------------------ ------------ Serguei Krivov, Assist. Research Professor, Computer Science Dept. & Gund Inst. for Ecological Economics, University of Vermont; 590 Main St. Burlington VT 05405 phone: (802)-656-2978 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040601/eed1943f/attachment.htm From ludaesch at sdsc.edu Tue Jun 1 23:16:08 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Tue, 1 Jun 2004 23:16:08 -0700 (PDT) Subject: [seek-kr-sms] elsevier's journal of web semantics In-Reply-To: <000001c44808$fad1d3d0$3ca6c684@BTS2K3000D5635086B> References: <000001c44808$fad1d3d0$3ca6c684@BTS2K3000D5635086B> Message-ID: <16573.28792.489823.40019@multivac.sdsc.edu> Yes, I also encourage you to submit work there! Bertram >>>>> "SK" == Serguei Krivov writes: SK> SK> In case you have not seen it already, here is a highly relevant link: SK> http://www.websemanticsjournal.org/ SK> It is possible to publish ontologies , demo software, and also SK> classical papers on grid/semantic web technology. SK> SK> serguei SK> ------------------------------------------------------------------------ SK> ------------ SK> Serguei Krivov, Assist. Research Professor, SK> Computer Science Dept. & Gund Inst. for Ecological Economics, SK> University of Vermont; 590 Main St. 
Burlington VT 05405
SK> phone: (802)-656-2978
From franz at nceas.ucsb.edu Wed Jun 2 09:56:50 2004 From: franz at nceas.ucsb.edu (Nico M. Franz) Date: Wed, 02 Jun 2004 09:56:50 -0700 Subject: [seek-kr-sms] Re: Thoughts on GUIDs - ontologies In-Reply-To: <40B3A0BC.8060604@sdsc.edu> References: <48FBDA40E5530C40BDFADFC767C0C33903932A53@skylark2.home.ku.edu> <48FBDA40E5530C40BDFADFC767C0C33903932A53@skylark2.home.ku.edu> Message-ID: <5.2.1.1.2.20040602083819.017c8460@hyperion.nceas.ucsb.edu>

Hi Shawn: that's a nice little paper for an outsider to look at. I'm writing this mainly to give the "ontologists" in SEEK a bit of an idea about biological systematics. It's tempting to think about what taxonomic concepts ought to look like so that computers can understand a lot about them. Though I must say I started stumbling already on page 1, when the authors started talking about more or less rigid essences.

I suppose you can look at most comprehensive and internally consistent biological classifications as ontologies. In the Linnean system, these tend to be hierarchical (ranks), and higher taxa subsume lower ones. A genus includes (necessarily) at least one species. The hierarchy is based on perceived similarities and differences in traits. These can range from DNA base pairs to bird songs. So far so good. Systematists could observe all kinds of similarities, e.g. whether an ant was collected by Shawn Bowers on the roof of the SDSC (check: yes), or not (check: no). What they strive to observe specifically to figure out the natural (evolutionary, phylogenetic) relationships among organisms is called special similarity, or homology. Homology is similarity due to common ancestry. It's usually (i.e. in systematics) applied to traits fixed among species and at higher levels, not among populations whose trait frequencies still vary due to continuous interbreeding. Example of a good homology: the wings of bats and birds are homologous AS tetrapod (these are land vertebrates) forearms.
Now, it seems to me that the essence/rigidity notions are (not yet?) sophisticated enough to describe what systematists do. First of all, homologies are inherently non-rigid. Nearly all traits vary somewhat among individuals pertaining to a species. They'll quite often vary considerably among species pertaining to a genus that is nevertheless said to have a particular trait. This is a feature of traits like wings or feathers being brought forth by thousands of DNA bases and being controlled by multiple genes, all of which are potentially subject to modification. So the "essential non-rigidity" (pun intended) goes all the way down. We have lots of reasons to believe it's real.

At a larger time scale, we can talk of birds being dinosaurs and the birds' feathers being homologous to the dinosaurs' reptile-like scales. Meaning that there's a historical, evolutionary identity that one might reasonably propose, in spite of the fact that there has been so much transformation that a lot of the DNA responsible for producing bird feathers no longer looks like that which once brought forth dinosaur scales. So the "is homologous to" can be a fairly sophisticated way of saying "is a". And scientists can have alternative, almost equally plausible solutions to this, backed up by varying amounts of relevant yet conflicting observations.

Which leads me to the second point. Even though the properties of a taxon, in the evolutionary (homologous) sense, are in some way necessary, they're not necessarily obvious at the moment that taxon is first named and classified. That can be the case regardless of whether one gets the classification right or wrong. So one could correctly recognize and name what ultimately turns out to be a valid genus of ants, yet get most of the features that make it a natural evolutionary entity wrong. Or just not mention them in a 1758 Latin publication.
In modern days, many systematists probably consider the naming/classifying business as a final, almost trivial step. The discussions really revolve around what's the right/best kind of evidence that a taxon is natural or not. For example: flies' hind wings are highly modified (into so-called halteres), and so are the fore wings (!) of an enigmatic lineage of parasitic insects (Strepsiptera in Italian). Whether these are homologous or not depends on whether flies and these parasites are one or two independent lineages. Lots of genes are being sequenced to figure this out, and the morphology is being reanalyzed too. There's still no convincing solution. People also try to see whether developmentally it's possible to have a mutation that produces halteres on another body segment (fore to hind wings). Etc. Once it's all said and done, those parasites may truly have "halteres" (not just things that look like them), and also be classified in a new way. So essences can be necessary (must be there always) and contingent (we won't see them until very late) at the same time.

I wrote this partly because so far I've had this hunch that CS/AI ontologists (what's a proper name? OntoClean sounds like a mouthwash, I'm sure it'll be synonymized soon) draw most of their examples from classification where humans STIPULATE properties. In systematics, *we* slowly try to work them out. Essences in taxonomy are ontologically soft (due to evolution), and in any case hard to figure out. Yet they're the backbone for building the hierarchies and naming taxa. I don't think that's impossible to incorporate in ontologies, though *I* haven't seen it yet. Has it been done? One way might be to represent properties probabilistically. "Is a, with a chance of 80%, according to study X of person Y." Maybe this makes no sense (yet)...

BTW, various of the other topics dealt with in the paper come up in systematics too.
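[Editor's note: Nico's probabilistic "is a, with a chance of 80%, according to study X of person Y" suggestion above could be sketched as a small data structure. This is a toy illustration only; the class names, the taxa, and the study attributions are made up for the example and are not SEEK code or real systematic results.]

```python
from dataclasses import dataclass

@dataclass
class IsA:
    """One probabilistic 'is a' assertion, with its provenance."""
    child: str    # taxon or concept name
    parent: str
    prob: float   # degree of support, 0..1
    source: str   # "study X of person Y"

class SoftTaxonomy:
    """Toy store for hedged subsumption claims (hypothetical, not SEEK code)."""
    def __init__(self):
        self.assertions = []

    def assert_is_a(self, child, parent, prob, source):
        self.assertions.append(IsA(child, parent, prob, source))

    def best_parent(self, child):
        """Return the most strongly supported parent claim for a taxon, if any."""
        claims = [a for a in self.assertions if a.child == child]
        return max(claims, key=lambda a: a.prob) if claims else None

# Two competing, differently supported placements (illustrative numbers only):
t = SoftTaxonomy()
t.assert_is_a("Strepsiptera", "Diptera", 0.8, "hypothetical molecular study")
t.assert_is_a("Strepsiptera", "Coleoptera", 0.6, "hypothetical morphological study")
print(t.best_parent("Strepsiptera").parent)
```

The point is only that conflicting classifications can coexist in one store, each tagged with strength and source, instead of a single hard "is a" edge.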
And philosophers of science have wondered for 50 years whether species are "classes or individuals". They really seem to be both.

Cheers,

Nico

At 12:38 PM 5/25/2004 -0700, Shawn Bowers wrote:
>Beach, James H wrote: snip
>>If the key itself has information then you will inevitably run into a
>>situation where the key will need to be changed because something about
>>the information represented by the key value has changed or is in doubt
>>or is a matter of interpretation, (thus losing the temporal uniqueness of
>>the GUID).
>
>Again, then the information used as the key isn't really "identifying"
>information, and you have a problem anyway.
>
>There is a very interesting article that people may want to read
>concerning properties of things and classification, including identity and
>unity, that may be relevant to what taxon is trying to accomplish with
>concepts.
>
>The paper can be found here, and was published in the Communications of
>the ACM in 2002. There are longer, more detailed versions available, but
>this is a good primer.
>
>http://www.loa-cnr.it/Papers/CACM2002.pdf
>
>snip
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040602/0608e8fb/attachment.htm

From dpennington at lternet.edu Tue Jun 8 14:04:24 2004 From: dpennington at lternet.edu (Deana Pennington) Date: Tue, 08 Jun 2004 15:04:24 -0600 Subject: [seek-kr-sms] UI Message-ID: <40C629D8.1080107@lternet.edu>

In thinking about the Kepler UI, it has occurred to me that it would really be nice if the ontologies that we construct to organize the actors into categories could also be used in a high-level workflow design phase. For example, in the niche modeling workflow, GARP, neural networks, GRASP and many other algorithms could be used for that one step in the workflow. Those algorithms would all be organized under some high-level hierarchy ("StatisticalModels").
Another example is the Pre-sample step, where we are using the GARP pre-sample algorithm, but other sampling algorithms could be substituted. There should be a high-level "Sampling" concept, under which different sampling algorithms would be organized. During the design phase, the user could construct a workflow based on these high level concepts (Sampling and StatisticalModel), then bind an actor (already implemented or using Chad's new actor) in a particular view of that workflow. So, a workflow would be designed at a high conceptual level, and have multiple views, binding different algorithms, and those different views would be logically linked through the high level workflow. The immediate case is the GARP workflow we are designing will need another version for the neural network algorithm, and that version will be virtually an exact replicate except for that actor. Seems like it would be better to have one workflow with different views...

I hope the above is coherent...in reading it, I'm not sure that it is :-)

Deana

--
********
Deana D. Pennington, PhD
Long-term Ecological Research Network Office
UNM Biology Department
MSC03 2020
1 University of New Mexico
Albuquerque, NM 87131-0001
505-272-7288 (office)
505 272-7080 (fax)

From ferdinando.villa at uvm.edu Tue Jun 8 15:44:15 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Tue, 08 Jun 2004 18:44:15 -0400 Subject: [seek-kr-sms] UI In-Reply-To: <40C629D8.1080107@lternet.edu> References: <40C629D8.1080107@lternet.edu> Message-ID: <1086734655.4160.16.camel@basil.snr.uvm.edu>

Hi Deana,

I've started thinking along these lines some time ago, on the grounds that modeling the high-level logical structure (rather than the workflow with all its inputs, outputs and loops) may be all our typical user is willing to do. Obviously I'm biased by interacting with my own user community, but they're probably representative of the wider SEEK user community. So I fully agree with you here.
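[Editor's note: the "one conceptual workflow, many bound views" idea Deana describes could be pictured with a minimal sketch. All names here (the concept steps, the actor names, the view labels) are illustrative placeholders, not Kepler API.]

```python
# Toy sketch of a conceptual workflow whose steps are high-level concepts,
# with several "views" that each bind every concept to a concrete actor.

conceptual_workflow = ["Sampling", "StatisticalModel"]  # high-level steps

# Each view binds every conceptual step to one concrete algorithm/actor.
# Swapping GARP for a neural network changes only the binding, not the workflow.
views = {
    "garp_run":   {"Sampling": "GarpPresample", "StatisticalModel": "GARP"},
    "neural_run": {"Sampling": "GarpPresample", "StatisticalModel": "NeuralNetwork"},
}

def bind(view_name):
    """Resolve the conceptual workflow into a concrete actor pipeline."""
    binding = views[view_name]
    return [binding[step] for step in conceptual_workflow]

print(bind("garp_run"))
print(bind("neural_run"))
```

Both views stay logically linked through the shared `conceptual_workflow` list, which is the property being asked for: one design, multiple bindings.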
However, I don't think that we can achieve such a high-level paradigm simply by augmenting the actors' specifications. For the IMA I've done a pretty thorough analysis of the relationship between the logical structure of a model/pipeline/concept and the workflow that calculates the states of the final "concept" you're after; as a result of that, I'm pretty convinced that they don't relate that simply. In Edinburgh (while not listening to the MyGrid presentation) I wrote down a rough explanation of what I think in this regard (and what I think that my work can contribute to SEEK and Kepler), and circulated it to a small group for initial feedback. I attach the document, which needs some patience on your part. If you can bear with some dense writing with an Italian accent, I think you'll find similarities with what you propose, and I'd love to hear what you think.

Cheers,
ferdinando

On Tue, 2004-06-08 at 17:04, Deana Pennington wrote:
> In thinking about the Kepler UI, it has occurred to me that it would
> really be nice if the ontologies that we construct to organize the
> actors into categories, could also be used in a high-level workflow
> design phase. For example, in the niche modeling workflow, GARP, neural
> networks, GRASP and many other algorithms could be used for that one
> step in the workflow. Those algorithms would all be organized under
> some high-level hierarchy ("StatisticalModels"). Another example is the
> Pre-sample step, where we are using the GARP pre-sample algorithm, but
> other sampling algorithms could be substituted. There should be a
> high-level "Sampling" concept, under which different sampling algorithms
> would be organized. During the design phase, the user could construct a
> workflow based on these high level concepts (Sampling and
> StatisticalModel), then bind an actor (already implemented or using
> Chad's new actor) in a particular view of that workflow.
> So, a
> workflow would be designed at a high conceptual level, and have multiple
> views, binding different algorithms, and those different views would be
> logically linked through the high level workflow. The immediate case is
> the GARP workflow we are designing will need another version for the
> neural network algorithm, and that version will be virtually an exact
> replicate except for that actor. Seems like it would be better to have
> one workflow with different views...
>
> I hope the above is coherent...in reading it, I'm not sure that it is :-)
>
> Deana
--
-------------- next part -------------- A non-text attachment was scrubbed... Name: SEEK_concepts.doc Type: application/msword Size: 43520 bytes Desc: not available Url : http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040608/0253ece7/SEEK_concepts.doc

From Serguei.Krivov at uvm.edu Thu Jun 10 07:42:54 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Thu, 10 Jun 2004 10:42:54 -0400 Subject: [seek-kr-sms] growl: owl-dl or owl full?! Message-ID: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B>

Hi All,

I am working on GrOWL editing and have an urgent design issue: should we impose an editing discipline which would allow OWL DL constructs only and do nothing when the user tries to make an OWL Full construct? Some editors like OilEd (which works with OWL now) are intolerant of OWL Full. Personally I think that this is right, since OWL Full ontologies are difficult to use. OWLAPI also seems not really happy to see OWL Full constructs; it reports an error, but somehow it still processes them.

Ideally one could have a trigger which switches OWL DL discipline on and off. But implementing such a trigger would increase the editing code maybe 1.6 times compared to making it plain OWL DL discipline. I would leave this for the future, but you guys may have other suggestions (?)

Please let me know what you think.
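[Editor's note: a toy illustration of the "OWL DL only" editing discipline being discussed. Real DL-ness checking is far more involved and this is not OWLAPI code; the rule shown, that one URI may not be used as both a class and an individual, is just one of the OWL Full punning cases a strict editor would refuse at edit time. All names are made up.]

```python
# Sketch of an editor guard that refuses edits producing OWL Full constructs,
# here limited to one classic violation: class-as-individual "punning".

class DLViolation(Exception):
    """Raised when an edit would push the ontology outside OWL DL."""
    pass

class StrictOntologyEditor:
    def __init__(self):
        self.classes = set()
        self.individuals = set()

    def add_class(self, uri):
        if uri in self.individuals:
            raise DLViolation(f"{uri} already used as an individual (OWL Full)")
        self.classes.add(uri)

    def add_individual(self, uri):
        if uri in self.classes:
            raise DLViolation(f"{uri} already used as a class (OWL Full)")
        self.individuals.add(uri)

ed = StrictOntologyEditor()
ed.add_class("Species")
ed.add_individual("ursus_arctos")
try:
    ed.add_individual("Species")  # class used as instance: refused under DL discipline
except DLViolation as e:
    print("rejected:", e)
```

The "trigger" alternative would amount to a boolean that downgrades the `raise` to a warning; the guard-only version above is the single-mode design Ferdinando argues for in the next message.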
serguei -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040610/cfe75d82/attachment.htm From ferdinando.villa at uvm.edu Thu Jun 10 08:21:50 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Thu, 10 Jun 2004 11:21:50 -0400 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B> References: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B> Message-ID: <1086880910.4449.4.camel@basil.snr.uvm.edu> My suggestion is to consider what the realm of GrOWL is, i.e. making complex things intuitive, and adopting the framework that has the best chance of defining the product within that realm. So I'm in favor of having ONE operation mode - thus avoiding triggers (to avoid code explosion and more "interface opacity") and I don't think we should worry about owl-full constructs - UNLESS our KR gurus tell us that they're an absolute necessity for the purposes of SEEK. cheers, ferdinando On Thu, 2004-06-10 at 10:42, Serguei Krivov wrote: > Hi All, > > I am working on growl editing and have an urgent design issue: > > Should we impose the editing discipline which would allow owl dl > constructs only and do nothing when user tries to make an owl full > construct? Some editors like oil-edit (that works with owl now) are > intolerant to owl full. Personally I think that this is right since > owl-full ontologies are difficult to use. OWLAPI seems also not really > happy to see owl-full constructs, it reports error, however somehow it > processes them. > > > > Ideally one can have a trigger which switch owl-dl discipline on and > off. But implementing such trigger would increase the editing code may > be 1.6 times comparing to making plain owl-dl discipline. I would > leave this for the future, but you guys may have other suggestions (?) > > Please let me know what you think. 
> > serguei > > -- From eal at eecs.berkeley.edu Thu Jun 10 08:20:35 2004 From: eal at eecs.berkeley.edu (Edward A Lee) Date: Thu, 10 Jun 2004 08:20:35 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <1086734655.4160.16.camel@basil.snr.uvm.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> Message-ID: <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> Apologies for my ignorance, but what is "IMA"? On a quick glance (hindered by lack of comprehension of TLA's), these ideas strike me as related to some work we've been doing on meta-modeling together with Vanderbilt University... The notion of meta-modeling is that one constructs models of families of models by specifying constraints on their static structure... Vanderbilt has a tool called GME (generic modeling environment) where a user specifies a meta model for a domain-specific modeling technique, and then GME synthesizes a customized visual editor that enforces those constraints. Conceivably we could build something similar in Kepler, where instead of building workflows, one constructs a meta model of a family of workflows... ? Just some random neuron firing triggered by Ferdinando's thoughts... Edward At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: >Hi Deana, > >I've started thinking along these lines some time ago, on the grounds >that modeling the high-level logical structure (rather than the workflow >with all its inputs, outputs and loops) may be all our typical user is >willing to do. Obviously I'm biased by interacting with my own user >community, but they're probably representative of the wider SEEK user >community. So I fully agree with you here. > >However, I don't think that we can achieve such an high-level paradigm >simply by augmenting the actors specifications. 
For the IMA I've done a >pretty thorough analysis of the relationship between the logical >structure of a model/pipeline/concept and the workflow that calculates >the states of the final "concept" you're after; as a result of that, I'm >pretty convinced that they don't relate that simply. In Edinburgh (while >not listening to the MyGrid presentation) I wrote down a rough >explanation of what I think in this regard (and what I think that my >work can contribute to SEEK and Kepler), and circulated to a small group >for initial feedback. I attach the document, which needs some patience >on your part. If you can bear with some dense writing with an Italian >accent, I think you'll find similarities with what you propose, and I'd >love to hear what you think. > >Cheers, >ferdinando > >On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: > > In thinking about the Kepler UI, it has occurred to me that it would > > really be nice if the ontologies that we construct to organize the > > actors into categories, could also be used in a high-level workflow > > design phase. For example, in the niche modeling workflow, GARP, neural > > networks, GRASP and many other algorithms could be used for that one > > step in the workflow. Those algorithms would all be organized under > > some high-level hierarchy ("StatisticalModels"). Another example is the > > Pre-sample step, where we are using the GARP pre-sample algorithm, but > > other sampling algorithms could be substituted. There should be a > > high-level "Sampling" concept, under which different sampling algorithms > > would be organized. During the design phase, the user could construct a > > workflow based on these high level concepts (Sampling and > > StatisticalModel), then bind an actor (already implemented or using > > Chad's new actor) in a particular view of that workflow. 
So, a > > workflow would be designed at a high conceptual level, and have multiple > > views, binding different algorithms, and those different views would be > > logically linked through the high level workflow. The immediate case is > > the GARP workflow we are designing will need another version for the > > neural network algorithm, and that version will be virtually an exact > > replicate except for that actor. Seems like it would be better to have > > one workflow with different views... > > > > I hope the above is coherent...in reading it, I'm not sure that it is :-) > > > > Deana > > >-- ------------ Edward A. Lee, Professor 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 phone: 510-642-0455, fax: 510-642-2739 eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal From ferdinando.villa at uvm.edu Thu Jun 10 09:57:55 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Thu, 10 Jun 2004 12:57:55 -0400 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> Message-ID: <1086886674.4449.37.camel@basil.snr.uvm.edu> IMA is Integrating Modelling Architecture (http://www.integratedmodelling.org, also see attached draft paper to be published in JIIS soon). Apologies for not spelling it out. By the way - what is TLA? :-) Anyway - sounds like GME is close to what we're talking about. As I was discussing earlier with Deana, what I've been after with the IMA system is precisely that kind of conceptually driven modeling - create a model/pipeline as an instance of a concept according to a specified ontology (e.g. 
food web, nitrogen cycle, shannon index), define all instances from EITHER data or models (by querying a DB or providing them), and the system turns it into the declarative specification of a workflow that can be later optimized and turned into MOML, C, Perl, whatever suits you, or "interpreted" to calculate the states. The workflow contains all loops, transformations, unit conversion etc. In SEEK/Kepler terms that could mean having a "semantic director" that creates the MOML for the actual workflow - you could have a tabbed window in Kepler and access the workflow if necessary, but you should be able to model at the conceptual level. I think that's what SEEK is ultimately after, except it's a notion we're just starting to bounce off each other. Personally I feel that the current "workflow with semantics and transformations" notion is half way between what we're talking about and the notion of the "engineering-like" workflow, which is probably easier to abandon for us ecologists than for computer scientists. I also think that the major impediment to an understanding that requires a paradigm switch is the early idealization of a graphical user interface - you think of that picture, and you can only conceptualize that kind of workflow, and that limits you. Of course that's only my opinion and there's no reason why SEEK can't work out with the current approach - I just feel that this is an eventual end point, but may be wrong. Sounds like GME is something we should look at. Any pointers/docs/software/thoughts? Ciao ferdinando On Thu, 2004-06-10 at 11:20, Edward A Lee wrote: > Apologies for my ignorance, but what is "IMA"? > > On a quick glance (hindered by lack of comprehension of TLA's), > these ideas strike me as related to some work we've been doing > on meta-modeling together with Vanderbilt University... The notion > of meta-modeling is that one constructs models of families of models > by specifying constraints on their static structure... 
Vanderbilt > has a tool called GME (generic modeling environment) where a user > specifies a meta model for a domain-specific modeling technique, > and then GME synthesizes a customized visual editor that enforces > those constraints. > > Conceivably we could build something similar in Kepler, where > instead of building workflows, one constructs a meta model of a family > of workflows... ? > > Just some random neuron firing triggered by Ferdinando's thoughts... > > Edward > > > At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: > >Hi Deana, > > > >I've started thinking along these lines some time ago, on the grounds > >that modeling the high-level logical structure (rather than the workflow > >with all its inputs, outputs and loops) may be all our typical user is > >willing to do. Obviously I'm biased by interacting with my own user > >community, but they're probably representative of the wider SEEK user > >community. So I fully agree with you here. > > > >However, I don't think that we can achieve such an high-level paradigm > >simply by augmenting the actors specifications. For the IMA I've done a > >pretty thorough analysis of the relationship between the logical > >structure of a model/pipeline/concept and the workflow that calculates > >the states of the final "concept" you're after; as a result of that, I'm > >pretty convinced that they don't relate that simply. In Edinburgh (while > >not listening to the MyGrid presentation) I wrote down a rough > >explanation of what I think in this regard (and what I think that my > >work can contribute to SEEK and Kepler), and circulated to a small group > >for initial feedback. I attach the document, which needs some patience > >on your part. If you can bear with some dense writing with an Italian > >accent, I think you'll find similarities with what you propose, and I'd > >love to hear what you think. 
> > > >Cheers, > >ferdinando > > > >On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: > > > In thinking about the Kepler UI, it has occurred to me that it would > > > really be nice if the ontologies that we construct to organize the > > > actors into categories, could also be used in a high-level workflow > > > design phase. For example, in the niche modeling workflow, GARP, neural > > > networks, GRASP and many other algorithms could be used for that one > > > step in the workflow. Those algorithms would all be organized under > > > some high-level hierarchy ("StatisticalModels"). Another example is the > > > Pre-sample step, where we are using the GARP pre-sample algorithm, but > > > other sampling algorithms could be substituted. There should be a > > > high-level "Sampling" concept, under which different sampling algorithms > > > would be organized. During the design phase, the user could construct a > > > workflow based on these high level concepts (Sampling and > > > StatisticalModel), then bind an actor (already implemented or using > > > Chad's new actor) in a particular view of that workflow. So, a > > > workflow would be designed at a high conceptual level, and have multiple > > > views, binding different algorithms, and those different views would be > > > logically linked through the high level workflow. The immediate case is > > > the GARP workflow we are designing will need another version for the > > > neural network algorithm, and that version will be virtually an exact > > > replicate except for that actor. Seems like it would be better to have > > > one workflow with different views... > > > > > > I hope the above is coherent...in reading it, I'm not sure that it is :-) > > > > > > Deana > > > > >-- > > ------------ > Edward A. 
Lee, Professor > 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 > phone: 510-642-0455, fax: 510-642-2739 > eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal > > _______________________________________________ > kepler-dev mailing list > kepler-dev at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/kepler-dev -- -------------- next part -------------- A non-text attachment was scrubbed... Name: villa_jiis.pdf Type: application/pdf Size: 648628 bytes Desc: not available Url : http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040610/64088e58/villa_jiis.pdf From ferdinando.villa at uvm.edu Thu Jun 10 10:14:19 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Thu, 10 Jun 2004 13:14:19 -0400 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <1086887039.40c8947f7e935@mail.lternet.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <1086887039.40c8947f7e935@mail.lternet.edu> Message-ID: <1086887659.4449.46.camel@basil.snr.uvm.edu> It may be just semantic nitpicking, but I think what makes things complex are the names more than the concepts. As long as we model the Shannon index and not the process that calculates it, things are extremely simple. Instead of defining the analytical process that calculates the index, we recognize it as a concept and create an instance of it. Its definition (in the relevant ontology) guides the modeling and query process through all the required relationships. Then SMS/AMS - not the user - creates the "model". Can we envision what is the ultimate concept that the GARP process calculates? Distribution of a species? 
Modeling that concept (using a GARP-aware subclass of the base ecological concept) will guide the user through the retrieval of compatible data (incrementally narrowing the space/time context to search as new required data are added), then create a GARP pipeline and run it - and modeling a subclass of it that's defined in another way will create GARP version 2.... 2 more cents, of an Euro as always.... ferdinando On Thu, 2004-06-10 at 13:03, penningd at lternet.edu wrote: > I think these are all excellent ideas, that we should follow up on. These are > closely related to the whole UI issue. Ferdinando and I have talked about > trying to generate a prototype of his IMA approach using the GARP workflow. I > think we should do the same thing with GME. Or maybe, if we look at them > together, they are closely linked and could be used together. I think > "meta-model" really conveys the idea here, and that it is the level at which our > scientists are most likely to work. Generating a working model from a > meta-model seems to be the difficult step, but that's where semantically-created > transformation steps would be extremely useful. > > Deana > > > Quoting Edward A Lee : > > > > > Apologies for my ignorance, but what is "IMA"? > > > > On a quick glance (hindered by lack of comprehension of TLA's), > > these ideas strike me as related to some work we've been doing > > on meta-modeling together with Vanderbilt University... The notion > > of meta-modeling is that one constructs models of families of models > > by specifying constraints on their static structure... Vanderbilt > > has a tool called GME (generic modeling environment) where a user > > specifies a meta model for a domain-specific modeling technique, > > and then GME synthesizes a customized visual editor that enforces > > those constraints. > > > > Conceivably we could build something similar in Kepler, where > > instead of building workflows, one constructs a meta model of a family > > of workflows... ? 
> > > > Just some random neuron firing triggered by Ferdinando's thoughts... > > > > Edward > > > > > > At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: > > >Hi Deana, > > > > > >I've started thinking along these lines some time ago, on the grounds > > >that modeling the high-level logical structure (rather than the > > workflow > > >with all its inputs, outputs and loops) may be all our typical user > > is > > >willing to do. Obviously I'm biased by interacting with my own user > > >community, but they're probably representative of the wider SEEK user > > >community. So I fully agree with you here. > > > > > >However, I don't think that we can achieve such an high-level > > paradigm > > >simply by augmenting the actors specifications. For the IMA I've done > > a > > >pretty thorough analysis of the relationship between the logical > > >structure of a model/pipeline/concept and the workflow that > > calculates > > >the states of the final "concept" you're after; as a result of that, > > I'm > > >pretty convinced that they don't relate that simply. In Edinburgh > > (while > > >not listening to the MyGrid presentation) I wrote down a rough > > >explanation of what I think in this regard (and what I think that my > > >work can contribute to SEEK and Kepler), and circulated to a small > > group > > >for initial feedback. I attach the document, which needs some > > patience > > >on your part. If you can bear with some dense writing with an Italian > > >accent, I think you'll find similarities with what you propose, and > > I'd > > >love to hear what you think. > > > > > >Cheers, > > >ferdinando > > > > > >On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: > > > > In thinking about the Kepler UI, it has occurred to me that it > > would > > > > really be nice if the ontologies that we construct to organize the > > > > actors into categories, could also be used in a high-level > > workflow > > > > design phase. 
For example, in the niche modeling workflow, GARP, > > neural > > > > networks, GRASP and many other algorithms could be used for that > > one > > > > step in the workflow. Those algorithms would all be organized > > under > > > > some high-level hierarchy ("StatisticalModels"). Another example is > > the > > > > Pre-sample step, where we are using the GARP pre-sample algorithm, > > but > > > > other sampling algorithms could be substituted. There should be a > > > > high-level "Sampling" concept, under which different sampling > > algorithms > > > > would be organized. During the design phase, the user could > > construct a > > > > workflow based on these high level concepts (Sampling and > > > > StatisticalModel), then bind an actor (already implemented or > > using > > > > Chad's new actor) in a particular view of that workflow. So, a > > > > workflow would be designed at a high conceptual level, and have > > multiple > > > > views, binding different algorithms, and those different views would > > be > > > > logically linked through the high level workflow. The immediate > > case is > > > > the GARP workflow we are designing will need another version for > > the > > > > neural network algorithm, and that version will be virtually an > > exact > > > > replicate except for that actor. Seems like it would be better to > > have > > > > one workflow with different views... > > > > > > > > I hope the above is coherent...in reading it, I'm not sure that it > > is :-) > > > > > > > > Deana > > > > > > >-- > > > > ------------ > > Edward A. 
Lee, Professor > > 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 > > phone: 510-642-0455, fax: 510-642-2739 > > eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal > > > > _______________________________________________ > > seek-kr-sms mailing list > > seek-kr-sms at ecoinformatics.org > > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms > > -- From penningd at lternet.edu Thu Jun 10 10:03:59 2004 From: penningd at lternet.edu (penningd@lternet.edu) Date: Thu, 10 Jun 2004 11:03:59 -0600 (MDT) Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> Message-ID: <1086887039.40c8947f7e935@mail.lternet.edu> I think these are all excellent ideas, that we should follow up on. These are closely related to the whole UI issue. Ferdinando and I have talked about trying to generate a prototype of his IMA approach using the GARP workflow. I think we should do the same thing with GME. Or maybe, if we look at them together, they are closely linked and could be used together. I think "meta-model" really conveys the idea here, and that it is the level at which our scientists are most likely to work. Generating a working model from a meta-model seems to be the difficult step, but that's where semantically-created transformation steps would be extremely useful. Deana Quoting Edward A Lee : > > Apologies for my ignorance, but what is "IMA"? > > On a quick glance (hindered by lack of comprehension of TLA's), > these ideas strike me as related to some work we've been doing > on meta-modeling together with Vanderbilt University... The notion > of meta-modeling is that one constructs models of families of models > by specifying constraints on their static structure... 
Vanderbilt > has a tool called GME (generic modeling environment) where a user > specifies a meta model for a domain-specific modeling technique, > and then GME synthesizes a customized visual editor that enforces > those constraints. > > Conceivably we could build something similar in Kepler, where > instead of building workflows, one constructs a meta model of a family > of workflows... ? > > Just some random neuron firing triggered by Ferdinando's thoughts... > > Edward > > > At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: > >Hi Deana, > > > >I've started thinking along these lines some time ago, on the grounds > >that modeling the high-level logical structure (rather than the > workflow > >with all its inputs, outputs and loops) may be all our typical user > is > >willing to do. Obviously I'm biased by interacting with my own user > >community, but they're probably representative of the wider SEEK user > >community. So I fully agree with you here. > > > >However, I don't think that we can achieve such an high-level > paradigm > >simply by augmenting the actors specifications. For the IMA I've done > a > >pretty thorough analysis of the relationship between the logical > >structure of a model/pipeline/concept and the workflow that > calculates > >the states of the final "concept" you're after; as a result of that, > I'm > >pretty convinced that they don't relate that simply. In Edinburgh > (while > >not listening to the MyGrid presentation) I wrote down a rough > >explanation of what I think in this regard (and what I think that my > >work can contribute to SEEK and Kepler), and circulated to a small > group > >for initial feedback. I attach the document, which needs some > patience > >on your part. If you can bear with some dense writing with an Italian > >accent, I think you'll find similarities with what you propose, and > I'd > >love to hear what you think. 
> > > >Cheers, > >ferdinando > > > >On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: > > > In thinking about the Kepler UI, it has occurred to me that it > would > > > really be nice if the ontologies that we construct to organize the > > > actors into categories, could also be used in a high-level > workflow > > > design phase. For example, in the niche modeling workflow, GARP, > neural > > > networks, GRASP and many other algorithms could be used for that > one > > > step in the workflow. Those algorithms would all be organized > under > > > some high-level hierarchy ("StatisticalModels"). Another example is > the > > > Pre-sample step, where we are using the GARP pre-sample algorithm, > but > > > other sampling algorithms could be substituted. There should be a > > > high-level "Sampling" concept, under which different sampling > algorithms > > > would be organized. During the design phase, the user could > construct a > > > workflow based on these high level concepts (Sampling and > > > StatisticalModel), then bind an actor (already implemented or > using > > > Chad's new actor) in a particular view of that workflow. So, a > > > workflow would be designed at a high conceptual level, and have > multiple > > > views, binding different algorithms, and those different views would > be > > > logically linked through the high level workflow. The immediate > case is > > > the GARP workflow we are designing will need another version for > the > > > neural network algorithm, and that version will be virtually an > exact > > > replicate except for that actor. Seems like it would be better to > have > > > one workflow with different views... > > > > > > I hope the above is coherent...in reading it, I'm not sure that it > is :-) > > > > > > Deana > > > > >-- > > ------------ > Edward A. 
Lee, Professor > 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 > phone: 510-642-0455, fax: 510-642-2739 > eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms > From bowers at sdsc.edu Thu Jun 10 10:17:39 2004 From: bowers at sdsc.edu (Shawn Bowers) Date: Thu, 10 Jun 2004 10:17:39 -0700 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B> References: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B> Message-ID: <40C897B3.3050001@sdsc.edu> Hi Serguei, If you look at the Protege data model, they have a language that offers meta-modeling constructs similar to those found in OWL-Full. In my opinion, the use of these constructs, unless you really know what you are doing, can be confusing and often leads to incomprehensible conceptual models. My general opinion is to not support similar constructs in GrOWL. But, it isn't clear to me at this point who the target user is of the GrOWL ontology editing and management tools. If it is scientists and other domain experts, I think most of the OWL-DL and even OWL-Lite constructs will be too much. For these users, I think we need to be very clear about what modeling constructs we want to support (e.g., these constructs may be at a "higher" level than OWL-DL constructs), explicitly support the needed constructs through visual notations (not OWL formulas); then figure out how those constructs are realized by OWL-Lite or OWL-DL. Since GrOWL seems to be on track to output OWL ontologies, these can be further edited by a knowledge "engineer" if needed (to add more constraints). However, if the target user group is knowledge engineers, e.g., Rich and the KR group, doesn't Protege already offer the necessary interface?
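As a sketch of the strict OWL-DL editing discipline under discussion: the editor would classify each node (class, instance, or datatype) and reject any arrow whose combination is only legal in OWL-Full. All names below are illustrative, not GrOWL's actual API.

```python
# Illustrative edge-validation policy for a graphical ontology editor.
# OWL-DL keeps classes and individuals disjoint, so subclass-of is only
# legal between classes and instance-of only from an individual to a class.
CLASS, INSTANCE, DATATYPE = "class", "instance", "datatype"

# (edge kind, source kind, target kind) triples accepted under OWL-DL
LEGAL_EDGES = {
    ("subclass-of", CLASS, CLASS),
    ("instance-of", INSTANCE, CLASS),
}

def edge_allowed(edge_kind, source_kind, target_kind):
    """Return True iff the proposed arrow is OWL-DL-safe."""
    return (edge_kind, source_kind, target_kind) in LEGAL_EDGES

# The problem cases from the thread are rejected under this policy:
assert not edge_allowed("subclass-of", INSTANCE, INSTANCE)  # OWL-Full only
assert not edge_allowed("subclass-of", CLASS, INSTANCE)     # OWL-Full only
assert not edge_allowed("is-a", DATATYPE, DATATYPE)         # never legal
assert edge_allowed("subclass-of", CLASS, CLASS)
```

A trigger that toggles OWL-DL discipline on and off would amount to swapping in a larger `LEGAL_EDGES` set, which is one way to keep that future option cheap.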
In general, the family of OWL standards is complex, with many modeling constructs, and verbose, not only because OWL is stored via XML, but also because it is based on RDF. I think there is a definite need for ontology tools that do more than just expose OWL or any other DL -- like XML, OWL is much better suited as a storage and exchange language, not as an interface in and of itself for users. So, my overall suggestion would be to figure out the necessary constructs for the target user group (which I'd be happy to help with), figure out how best to present these to the user (again, I'd be happy to help with this), then figure out if it is representable in OWL-Lite, OWL-DL (most likely), or OWL-Full (not likely). shawn Serguei Krivov wrote: > Hi All, > > I am working on growl editing and have an urgent design issue: > > Should we impose the editing discipline which would allow owl dl > constructs only and do nothing when user tries to make an owl full > construct? Some editors like oil-edit (that works with owl now) are > intolerant to owl full. Personally I think that this is right since > owl-full ontologies are difficult to use. OWLAPI seems also not really > happy to see owl-full constructs, it reports error, however somehow it > processes them. > > > > Ideally one can have a trigger which switch owl-dl discipline on and > off. But implementing such trigger would increase the editing code may > be 1.6 times comparing to making plain owl-dl discipline. I would leave > this for the future, but you guys may have other suggestions (?) > > Please let me know what you think. 
> > serguei > > > From bowers at sdsc.edu Thu Jun 10 10:51:14 2004 From: bowers at sdsc.edu (Shawn Bowers) Date: Thu, 10 Jun 2004 10:51:14 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <1086887659.4449.46.camel@basil.snr.uvm.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <1086887039.40c8947f7e935@mail.lternet.edu> <1086887659.4449.46.camel@basil.snr.uvm.edu> Message-ID: <40C89F92.9080404@sdsc.edu> From my understanding of the goals of SEEK, I think what you describe below Ferdinando is very much the ultimate goal. Although it has never been formally stated, I think we are working "bottom up" to achieve the goal; building the necessary infrastructure so operations as you envision below can be realized. I believe that a driving factor for developing the system bottom up is that we are charged with "enlivening" legacy data and services. To do what you suggest, we need ontologies, we need ontology interfaces and tools (so users can easily pose such questions, and generally work at the ontology level), we need datasets, we need datasets annotated with (or accessible via) ontologies (for retrieving, accessing, and combining the relevant data), we need workflows/analysis steps (that can be executed), and annotations for these as well (so they too can be retrieved and appropriately executed). [Note that in SEEK, we haven't yet proposed to model the calculation of, e.g., a Shannon index, we have only proposed annotating such processes with instances from an ontology (e.g., that the process computes a biodiversity index), and semantically describing the types of input and output the process consumes and produces (i.e., that it takes proportional abundances, etc., the description acting much like a database schema annotated with ontology defs).] 
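For reference, the Shannon index process mentioned above consumes proportional abundances p_i and produces a single diversity value, H = -Σ p_i ln p_i. A minimal sketch of that computation (the function name is ours, not a SEEK actor):

```python
import math

def shannon_index(proportions):
    """Shannon diversity index H = -sum(p_i * ln p_i) over proportional
    abundances; zero-abundance terms contribute nothing by convention."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

# Four equally abundant species: H = ln(4), about 1.386
h = shannon_index([0.25, 0.25, 0.25, 0.25])
```

A semantic annotation of this step would then say only that the input is a list of proportional abundances and the output a biodiversity index, without modeling the arithmetic itself.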
Just as a note, what you suggest is also very much related to planning problems in AI (which obviously have some practical limitations). A benefit of pursuing this bottom-up strategy in SEEK is that we may make things easier for scientists and researchers, even if we cannot achieve (e.g., either computationally or in a reasonable amount of time) the scenario you describe below. shawn Ferdinando Villa wrote: > It may be just semantic nitpicking, but I think what makes things > complex are the names more than the concepts. As long as we model the > Shannon index and not the process that calculates it, things are > extremely simple. Instead of defining the analytical process that > calculates the index, we recognize it as a concept and create an > instance of it. Its definition (in the relevant ontology) guides the > modeling and query process through all the required relationships. Then > SMS/AMS - not the user - creates the "model". Can we envision what is > the ultimate concept that the GARP process calculates? Distribution of a > species? Modeling that concept (using a GARP-aware subclass of the base > ecological concept) will guide the user through the retrieval of > compatible data (incrementally narrowing the space/time context to > search as new required data are added), then create a GARP pipeline and > run it - and modeling a subclass of it that's defined in another way > will create GARP version 2.... > > 2 more cents, of an Euro as always.... > ferdinando > > On Thu, 2004-06-10 at 13:03, penningd at lternet.edu wrote: > >>I think these are all excellent ideas, that we should follow up on. These are >>closely related to the whole UI issue. Ferdinando and I have talked about >>trying to generate a prototype of his IMA approach using the GARP workflow. I >>think we should do the same thing with GME. Or maybe, if we look at them >>together, they are closely linked and could be used together. 
I think >>"meta-model" really conveys the idea here, and that it is the level at which our >>scientists are most likely to work. Generating a working model from a >>meta-model seems to be the difficult step, but that's where semantically-created >>transformation steps would be extremely useful. >> >>Deana >> >> >>Quoting Edward A Lee : >> >> >>>Apologies for my ignorance, but what is "IMA"? >>> >>>On a quick glance (hindered by lack of comprehension of TLA's), >>>these ideas strike me as related to some work we've been doing >>>on meta-modeling together with Vanderbilt University... The notion >>>of meta-modeling is that one constructs models of families of models >>>by specifying constraints on their static structure... Vanderbilt >>>has a tool called GME (generic modeling environment) where a user >>>specifies a meta model for a domain-specific modeling technique, >>>and then GME synthesizes a customized visual editor that enforces >>>those constraints. >>> >>>Conceivably we could build something similar in Kepler, where >>>instead of building workflows, one constructs a meta model of a family >>>of workflows... ? >>> >>>Just some random neuron firing triggered by Ferdinando's thoughts... >>> >>>Edward >>> >>> >>>At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: >>> >>>>Hi Deana, >>>> >>>>I've started thinking along these lines some time ago, on the grounds >>>>that modeling the high-level logical structure (rather than the >>> >>>workflow >>> >>>>with all its inputs, outputs and loops) may be all our typical user >>> >>>is >>> >>>>willing to do. Obviously I'm biased by interacting with my own user >>>>community, but they're probably representative of the wider SEEK user >>>>community. So I fully agree with you here. >>>> >>>>However, I don't think that we can achieve such an high-level >>> >>>paradigm >>> >>>>simply by augmenting the actors specifications. 
For the IMA I've done >>> >>>a >>> >>>>pretty thorough analysis of the relationship between the logical >>>>structure of a model/pipeline/concept and the workflow that >>> >>>calculates >>> >>>>the states of the final "concept" you're after; as a result of that, >>> >>>I'm >>> >>>>pretty convinced that they don't relate that simply. In Edinburgh >>> >>>(while >>> >>>>not listening to the MyGrid presentation) I wrote down a rough >>>>explanation of what I think in this regard (and what I think that my >>>>work can contribute to SEEK and Kepler), and circulated to a small >>> >>>group >>> >>>>for initial feedback. I attach the document, which needs some >>> >>>patience >>> >>>>on your part. If you can bear with some dense writing with an Italian >>>>accent, I think you'll find similarities with what you propose, and >>> >>>I'd >>> >>>>love to hear what you think. >>>> >>>>Cheers, >>>>ferdinando >>>> >>>>On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: >>>> >>>>>In thinking about the Kepler UI, it has occurred to me that it >>> >>>would >>> >>>>>really be nice if the ontologies that we construct to organize the >>>>>actors into categories, could also be used in a high-level >>> >>>workflow >>> >>>>>design phase. For example, in the niche modeling workflow, GARP, >>> >>>neural >>> >>>>>networks, GRASP and many other algorithms could be used for that >>> >>>one >>> >>>>>step in the workflow. Those algorithms would all be organized >>> >>>under >>> >>>>>some high-level hierarchy ("StatisticalModels"). Another example is >>> >>>the >>> >>>>>Pre-sample step, where we are using the GARP pre-sample algorithm, >>> >>>but >>> >>>>>other sampling algorithms could be substituted. There should be a >>>>>high-level "Sampling" concept, under which different sampling >>> >>>algorithms >>> >>>>>would be organized. 
During the design phase, the user could >>> >>>construct a >>> >>>>>workflow based on these high level concepts (Sampling and >>>>>StatisticalModel), then bind an actor (already implemented or >>> >>>using >>> >>>>>Chad's new actor) in a particular view of that workflow. So, a >>>>>workflow would be designed at a high conceptual level, and have >>> >>>multiple >>> >>>>>views, binding different algorithms, and those different views would >>> >>>be >>> >>>>>logically linked through the high level workflow. The immediate >>> >>>case is >>> >>>>>the GARP workflow we are designing will need another version for >>> >>>the >>> >>>>>neural network algorithm, and that version will be virtually an >>> >>>exact >>> >>>>>replicate except for that actor. Seems like it would be better to >>> >>>have >>> >>>>>one workflow with different views... >>>>> >>>>>I hope the above is coherent...in reading it, I'm not sure that it >>> >>>is :-) >>> >>>>>Deana >>>>> >>>> >>>>-- >>> >>>------------ >>>Edward A. Lee, Professor >>>518 Cory Hall, UC Berkeley, Berkeley, CA 94720 >>>phone: 510-642-0455, fax: 510-642-2739 >>>eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal >>> >>>_______________________________________________ >>>seek-kr-sms mailing list >>>seek-kr-sms at ecoinformatics.org >>>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms >>> From Serguei.Krivov at uvm.edu Thu Jun 10 10:57:29 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Thu, 10 Jun 2004 13:57:29 -0400 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: <40C897B3.3050001@sdsc.edu> Message-ID: <000001c44f14$65667450$3ca6c684@BTS2K3000D5635086B> ...So, my overall suggestion, would be to figure out the necessary constructs for the target user group (which I'd be happy to help with), figure out how best to present these to the user (again, I'd be happy to help with this), then figure out if it is representable in OWL-Lite, OWL-DL (most likely), or OWL-Full (not likely). 
Shawn It would be really good to know all the constructs that we need. Maybe each of us who does ontology or DL design could send some references as we encounter constructs of importance. Of course we shall use subclass-of, instance-of, etc.; the question is about more complex things. I'll be saving these messages and eventually we shall summarize them. My present questions, however, are more down to earth. For instance, the present GrOWL editor (see CVS) allows drawing a subclass-of arrow from an instance to an instance, or from a class to an instance. This is OK for OWL-Full or for F-Logic, but not OK for OWL-DL. Besides, the present graphic editor allows many other incorrect constructs, say, is-a arrows from a datatype to a datatype. I am trying to impose a very strict policy which would not allow any such errors. The question is how strict it should be. Ferdinando, Shawn and I seem to concur that it should also exclude OWL-Full constructs. Any other opinions? serguei Serguei Krivov wrote: > Hi All, > > I am working on growl editing and have an urgent design issue: > > Should we impose the editing discipline which would allow owl dl > constructs only and do nothing when user tries to make an owl full > construct? Some editors like oil-edit (that works with owl now) are > intolerant to owl full. Personally I think that this is right since > owl-full ontologies are difficult to use. OWLAPI seems also not really > happy to see owl-full constructs, it reports error, however somehow it > processes them. > > > > Ideally one can have a trigger which switch owl-dl discipline on and > off. But implementing such trigger would increase the editing code may > be 1.6 times comparing to making plain owl-dl discipline. I would leave > this for the future, but you guys may have other suggestions (?) > > Please let me know what you think. 
> > serguei > > > From ferdinando.villa at uvm.edu Thu Jun 10 11:09:07 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Thu, 10 Jun 2004 14:09:07 -0400 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <40C89F92.9080404@sdsc.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <1086887039.40c8947f7e935@mail.lternet.edu> <1086887659.4449.46.camel@basil.snr.uvm.edu> <40C89F92.9080404@sdsc.edu> Message-ID: <1086890946.4449.57.camel@basil.snr.uvm.edu> I like the top-down vs. the bottom-up distinction... Having no authority to speak about the goals of SEEK, I would just add that I think there are definite advantages in being able to recognize, e.g., the Shannon index as a concrete subclass of the abstract "Ecological diversity" that sits in the core SEEK KB, both from the points of view of internal software design and of communicating concepts and tools with our user base. Being a concrete class, you can use the Shannon index semantic type not only to tag a dataset, but also to associate an implementation that can produce a workflow from a legal instance of it, defined in a suitable knowledge modelling environment, and whose complexity scales exactly with its conceptual definition - what the ecologist knows already, as opposed to the workflow's complexity. Also in terms of system maintenance, this characterization can use a more limited set of actors and would concentrate on extending the "tools" by extending the knowledge base rather than the processing environment. That's where I'm going with the IMA (with GrOWL as the modelling environment and with a lot of the "bottom-up" tools and concepts I learned from you guys) and I'm very much hoping that by cross-fertilizing approaches like this, we can find ourselves at the point of encounter before SEEK is over! 
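One toy rendering of this idea, with all names hypothetical: the concrete semantic type subclasses the abstract ecological concept and itself carries enough definition to emit a pipeline from a modeled instance, so extending the system means adding concepts to the KB rather than actors to the engine.

```python
from abc import ABC, abstractmethod

class EcologicalDiversity(ABC):
    """Abstract concept in the core KB; no workflow can be derived from it."""
    @abstractmethod
    def workflow(self, dataset):
        ...

class ShannonIndex(EcologicalDiversity):
    """Concrete subclass: its definition suffices to derive a pipeline."""
    def workflow(self, dataset):
        # A real system would assemble executable actors from the concept's
        # definition; here we only name the steps for illustration.
        return ["retrieve:" + dataset, "proportional-abundances", "shannon"]

steps = ShannonIndex().workflow("species-occurrences")
```

Adding a new diversity measure would then mean writing one more concrete subclass, with the generic machinery untouched.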
Ciao ferdinando On Thu, 2004-06-10 at 13:51, Shawn Bowers wrote: > From my understanding of the goals of SEEK, I think what you describe > below Ferdinando is very much the ultimate goal. Although it has never > been formally stated, I think we are working "bottom up" to achieve the > goal; building the necessary infrastructure so operations as you > envision below can be realized. I believe that a driving factor for > developing the system bottom up is because we are charged with > "enlivening" legacy data and services. > > To do what you suggest, we need ontologies, we need ontology interfaces > and tools (so users can easily pose such questions, and generally work > at the ontology level), we need datasets, we need datasets annotated > with (or accessible via) ontologies (for retrieving, accessing, and > combining the relevant data), we need workflows/analysis steps (that can > be executed), and annotations for these as well (so they too can be > retrieved and appropriated executed). [Note that in SEEK, we haven't yet > proposed to model the calcuation of, e.g., a Shannon index, we have only > proposed annotating such processes with instances from an ontology > (e.g., that the process computes a biodiversity index), and semantically > describing the types of input and output the process consumes and > produces (i.e., that it takes proportional abundances, etc., the > description acting much like a database schema annotated with ontology > defs).] > > Just as a note, what you suggest is also very much related to planning > problems in AI (which obviously have some practical limitations). A > benefit of pursuing this bottom-up strategy in SEEK is that we may make > things easier for scientists and researchers, even if we cannot achieve > (e.g., either computationally or in a reasonable amount of time) the > scenario you describe below. 
> > > shawn > > Ferdinando Villa wrote: > > > It may be just semantic nitpicking, but I think what makes things > > complex are the names more than the concepts. As long as we model the > > Shannon index and not the process that calculates it, things are > > extremely simple. Instead of defining the analytical process that > > calculates the index, we recognize it as a concept and create an > > instance of it. Its definition (in the relevant ontology) guides the > > modeling and query process through all the required relationships. Then > > SMS/AMS - not the user - creates the "model". Can we envision what is > > the ultimate concept that the GARP process calculates? Distribution of a > > species? Modeling that concept (using a GARP-aware subclass of the base > > ecological concept) will guide the user through the retrieval of > > compatible data (incrementally narrowing the space/time context to > > search as new required data are added), then create a GARP pipeline and > > run it - and modeling a subclass of it that's defined in another way > > will create GARP version 2.... > > > > 2 more cents, of an Euro as always.... > > ferdinando > > > > On Thu, 2004-06-10 at 13:03, penningd at lternet.edu wrote: > > > >>I think these are all excellent ideas, that we should follow up on. These are > >>closely related to the whole UI issue. Ferdinando and I have talked about > >>trying to generate a prototype of his IMA approach using the GARP workflow. I > >>think we should do the same thing with GME. Or maybe, if we look at them > >>together, they are closely linked and could be used together. I think > >>"meta-model" really conveys the idea here, and that it is the level at which our > >>scientists are most likely to work. Generating a working model from a > >>meta-model seems to be the difficult step, but that's where semantically-created > >>transformation steps would be extremely useful. 
> >> > >>Deana > >> > >> > >>Quoting Edward A Lee : > >> > >> > >>>Apologies for my ignorance, but what is "IMA"? > >>> > >>>On a quick glance (hindered by lack of comprehension of TLA's), > >>>these ideas strike me as related to some work we've been doing > >>>on meta-modeling together with Vanderbilt University... The notion > >>>of meta-modeling is that one constructs models of families of models > >>>by specifying constraints on their static structure... Vanderbilt > >>>has a tool called GME (generic modeling environment) where a user > >>>specifies a meta model for a domain-specific modeling technique, > >>>and then GME synthesizes a customized visual editor that enforces > >>>those constraints. > >>> > >>>Conceivably we could build something similar in Kepler, where > >>>instead of building workflows, one constructs a meta model of a family > >>>of workflows... ? > >>> > >>>Just some random neuron firing triggered by Ferdinando's thoughts... > >>> > >>>Edward > >>> > >>> > >>>At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: > >>> > >>>>Hi Deana, > >>>> > >>>>I've started thinking along these lines some time ago, on the grounds > >>>>that modeling the high-level logical structure (rather than the > >>> > >>>workflow > >>> > >>>>with all its inputs, outputs and loops) may be all our typical user > >>> > >>>is > >>> > >>>>willing to do. Obviously I'm biased by interacting with my own user > >>>>community, but they're probably representative of the wider SEEK user > >>>>community. So I fully agree with you here. > >>>> > >>>>However, I don't think that we can achieve such an high-level > >>> > >>>paradigm > >>> > >>>>simply by augmenting the actors specifications. 
For the IMA I've done > >>> > >>>a > >>> > >>>>pretty thorough analysis of the relationship between the logical > >>>>structure of a model/pipeline/concept and the workflow that > >>> > >>>calculates > >>> > >>>>the states of the final "concept" you're after; as a result of that, > >>> > >>>I'm > >>> > >>>>pretty convinced that they don't relate that simply. In Edinburgh > >>> > >>>(while > >>> > >>>>not listening to the MyGrid presentation) I wrote down a rough > >>>>explanation of what I think in this regard (and what I think that my > >>>>work can contribute to SEEK and Kepler), and circulated to a small > >>> > >>>group > >>> > >>>>for initial feedback. I attach the document, which needs some > >>> > >>>patience > >>> > >>>>on your part. If you can bear with some dense writing with an Italian > >>>>accent, I think you'll find similarities with what you propose, and > >>> > >>>I'd > >>> > >>>>love to hear what you think. > >>>> > >>>>Cheers, > >>>>ferdinando > >>>> > >>>>On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: > >>>> > >>>>>In thinking about the Kepler UI, it has occurred to me that it > >>> > >>>would > >>> > >>>>>really be nice if the ontologies that we construct to organize the > >>>>>actors into categories, could also be used in a high-level > >>> > >>>workflow > >>> > >>>>>design phase. For example, in the niche modeling workflow, GARP, > >>> > >>>neural > >>> > >>>>>networks, GRASP and many other algorithms could be used for that > >>> > >>>one > >>> > >>>>>step in the workflow. Those algorithms would all be organized > >>> > >>>under > >>> > >>>>>some high-level hierarchy ("StatisticalModels"). Another example is > >>> > >>>the > >>> > >>>>>Pre-sample step, where we are using the GARP pre-sample algorithm, > >>> > >>>but > >>> > >>>>>other sampling algorithms could be substituted. There should be a > >>>>>high-level "Sampling" concept, under which different sampling > >>> > >>>algorithms > >>> > >>>>>would be organized. 
During the design phase, the user could > >>> > >>>construct a > >>> > >>>>>workflow based on these high level concepts (Sampling and > >>>>>StatisticalModel), then bind an actor (already implemented or > >>> > >>>using > >>> > >>>>>Chad's new actor) in a particular view of that workflow. So, a > >>>>>workflow would be designed at a high conceptual level, and have > >>> > >>>multiple > >>> > >>>>>views, binding different algorithms, and those different views would > >>> > >>>be > >>> > >>>>>logically linked through the high level workflow. The immediate > >>> > >>>case is > >>> > >>>>>the GARP workflow we are designing will need another version for > >>> > >>>the > >>> > >>>>>neural network algorithm, and that version will be virtually an > >>> > >>>exact > >>> > >>>>>replicate except for that actor. Seems like it would be better to > >>> > >>>have > >>> > >>>>>one workflow with different views... > >>>>> > >>>>>I hope the above is coherent...in reading it, I'm not sure that it > >>> > >>>is :-) > >>> > >>>>>Deana > >>>>> > >>>> > >>>>-- > >>> > >>>------------ > >>>Edward A. 
Lee, Professor > >>518 Cory Hall, UC Berkeley, Berkeley, CA 94720 > >>phone: 510-642-0455, fax: 510-642-2739 > >>eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal > >> > >>_______________________________________________ > >>seek-kr-sms mailing list > >>seek-kr-sms at ecoinformatics.org > >>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms > >> > > _______________________________________________ > kepler-dev mailing list > kepler-dev at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/kepler-dev -- From bowers at sdsc.edu Thu Jun 10 11:15:47 2004 From: bowers at sdsc.edu (Shawn Bowers) Date: Thu, 10 Jun 2004 11:15:47 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <1086890946.4449.57.camel@basil.snr.uvm.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <1086887039.40c8947f7e935@mail.lternet.edu> <1086887659.4449.46.camel@basil.snr.uvm.edu> <40C89F92.9080404@sdsc.edu> <1086890946.4449.57.camel@basil.snr.uvm.edu> Message-ID: <40C8A553.4090101@sdsc.edu> I probably don't have the authority to speak about the goals of SEEK either :-) shawn Ferdinando Villa wrote: > I like the top-down vs. the bottom-up distinction... Having no > authorities to speak about the goals of SEEK, I would just add that I > think there are definite advantages in being able to recognize, e.g., > the Shannon index as a concrete subclass of the abstract "Ecological > diversity" that sits in the core SEEK KB, both from the points of view > of internal software design and of communicating concepts and tools with > our user base.
Being a concrete class, you can use the Shannon index > semantic type not only to tag a dataset, but also to associate an > implementation that can produce a workflow from a legal instance of it, > defined in a suitable knowledge modelling environment, and whose > complexity scales exactly with its conceptual definition - what the > ecologist knows already, as opposed to the workflow's complexity. Also > in terms of system maintenance, this characterization can use a more > limited set of actors and would concentrate on extending the "tools" by > extending the knowledge base rather than the processing environment. > That's where I'm going with the IMA (with GrOWL as the modelling > environment and with a lot of the "bottom-up" tools and concepts I > learned from you guys) and I'm very much hoping that by > cross-fertilizing approaches like this, we can find ourselves at the > point of encounter before SEEK is over! > > Ciao > ferdinando > > On Thu, 2004-06-10 at 13:51, Shawn Bowers wrote: > >> From my understanding of the goals of SEEK, I think what you describe >>below Ferdinando is very much the ultimate goal. Although it has never >>been formally stated, I think we are working "bottom up" to achieve the >>goal; building the necessary infrastructure so operations as you >>envision below can be realized. I believe that a driving factor for >>developing the system bottom up is because we are charged with >>"enlivening" legacy data and services. >> >>To do what you suggest, we need ontologies, we need ontology interfaces >>and tools (so users can easily pose such questions, and generally work >>at the ontology level), we need datasets, we need datasets annotated >>with (or accessible via) ontologies (for retrieving, accessing, and >>combining the relevant data), we need workflows/analysis steps (that can >>be executed), and annotations for these as well (so they too can be >>retrieved and appropriately executed).
[Note that in SEEK, we haven't yet >>proposed to model the calculation of, e.g., a Shannon index, we have only >>proposed annotating such processes with instances from an ontology >>(e.g., that the process computes a biodiversity index), and semantically >>describing the types of input and output the process consumes and >>produces (i.e., that it takes proportional abundances, etc., the >>description acting much like a database schema annotated with ontology >>defs).] >> >>Just as a note, what you suggest is also very much related to planning >>problems in AI (which obviously have some practical limitations). A >>benefit of pursuing this bottom-up strategy in SEEK is that we may make >>things easier for scientists and researchers, even if we cannot achieve >>(e.g., either computationally or in a reasonable amount of time) the >>scenario you describe below. >> >> >>shawn >> >>Ferdinando Villa wrote: >> >> >>>It may be just semantic nitpicking, but I think what makes things >>>complex are the names more than the concepts. As long as we model the >>>Shannon index and not the process that calculates it, things are >>>extremely simple. Instead of defining the analytical process that >>>calculates the index, we recognize it as a concept and create an >>>instance of it. Its definition (in the relevant ontology) guides the >>>modeling and query process through all the required relationships. Then >>>SMS/AMS - not the user - creates the "model". Can we envision what is >>>the ultimate concept that the GARP process calculates? Distribution of a >>>species? Modeling that concept (using a GARP-aware subclass of the base >>>ecological concept) will guide the user through the retrieval of >>>compatible data (incrementally narrowing the space/time context to >>>search as new required data are added), then create a GARP pipeline and >>>run it - and modeling a subclass of it that's defined in another way >>>will create GARP version 2.... >>> >>>2 more cents, of an Euro as always....
>>>ferdinando >>> >>>On Thu, 2004-06-10 at 13:03, penningd at lternet.edu wrote: >>> >>> >>>>I think these are all excellent ideas, that we should follow up on. These are >>>>closely related to the whole UI issue. Ferdinando and I have talked about >>>>trying to generate a prototype of his IMA approach using the GARP workflow. I >>>>think we should do the same thing with GME. Or maybe, if we look at them >>>>together, they are closely linked and could be used together. I think >>>>"meta-model" really conveys the idea here, and that it is the level at which our >>>>scientists are most likely to work. Generating a working model from a >>>>meta-model seems to be the difficult step, but that's where semantically-created >>>>transformation steps would be extremely useful. >>>> >>>>Deana >>>> >>>> >>>>Quoting Edward A Lee : >>>> >>>> >>>> >>>>>Apologies for my ignorance, but what is "IMA"? >>>>> >>>>>On a quick glance (hindered by lack of comprehension of TLA's), >>>>>these ideas strike me as related to some work we've been doing >>>>>on meta-modeling together with Vanderbilt University... The notion >>>>>of meta-modeling is that one constructs models of families of models >>>>>by specifying constraints on their static structure... Vanderbilt >>>>>has a tool called GME (generic modeling environment) where a user >>>>>specifies a meta model for a domain-specific modeling technique, >>>>>and then GME synthesizes a customized visual editor that enforces >>>>>those constraints. >>>>> >>>>>Conceivably we could build something similar in Kepler, where >>>>>instead of building workflows, one constructs a meta model of a family >>>>>of workflows... ? >>>>> >>>>>Just some random neuron firing triggered by Ferdinando's thoughts... 
>>>>> >>>>>Edward >>>>> >>>>> >>>>>At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: >>>>> >>>>> >>>>>>Hi Deana, >>>>>> >>>>>>I've started thinking along these lines some time ago, on the grounds >>>>>>that modeling the high-level logical structure (rather than the >>>>> >>>>>workflow >>>>> >>>>> >>>>>>with all its inputs, outputs and loops) may be all our typical user >>>>> >>>>>is >>>>> >>>>> >>>>>>willing to do. Obviously I'm biased by interacting with my own user >>>>>>community, but they're probably representative of the wider SEEK user >>>>>>community. So I fully agree with you here. >>>>>> >>>>>>However, I don't think that we can achieve such an high-level >>>>> >>>>>paradigm >>>>> >>>>> >>>>>>simply by augmenting the actors specifications. For the IMA I've done >>>>> >>>>>a >>>>> >>>>> >>>>>>pretty thorough analysis of the relationship between the logical >>>>>>structure of a model/pipeline/concept and the workflow that >>>>> >>>>>calculates >>>>> >>>>> >>>>>>the states of the final "concept" you're after; as a result of that, >>>>> >>>>>I'm >>>>> >>>>> >>>>>>pretty convinced that they don't relate that simply. In Edinburgh >>>>> >>>>>(while >>>>> >>>>> >>>>>>not listening to the MyGrid presentation) I wrote down a rough >>>>>>explanation of what I think in this regard (and what I think that my >>>>>>work can contribute to SEEK and Kepler), and circulated to a small >>>>> >>>>>group >>>>> >>>>> >>>>>>for initial feedback. I attach the document, which needs some >>>>> >>>>>patience >>>>> >>>>> >>>>>>on your part. If you can bear with some dense writing with an Italian >>>>>>accent, I think you'll find similarities with what you propose, and >>>>> >>>>>I'd >>>>> >>>>> >>>>>>love to hear what you think. 
>>>>>> >>>>>>Cheers, >>>>>>ferdinando >>>>>> >>>>>>On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: >>>>>> >>>>>> >>>>>>>In thinking about the Kepler UI, it has occurred to me that it >>>>> >>>>>would >>>>> >>>>> >>>>>>>really be nice if the ontologies that we construct to organize the >>>>>>>actors into categories, could also be used in a high-level >>>>> >>>>>workflow >>>>> >>>>> >>>>>>>design phase. For example, in the niche modeling workflow, GARP, >>>>> >>>>>neural >>>>> >>>>> >>>>>>>networks, GRASP and many other algorithms could be used for that >>>>> >>>>>one >>>>> >>>>> >>>>>>>step in the workflow. Those algorithms would all be organized >>>>> >>>>>under >>>>> >>>>> >>>>>>>some high-level hierarchy ("StatisticalModels"). Another example is >>>>> >>>>>the >>>>> >>>>> >>>>>>>Pre-sample step, where we are using the GARP pre-sample algorithm, >>>>> >>>>>but >>>>> >>>>> >>>>>>>other sampling algorithms could be substituted. There should be a >>>>>>>high-level "Sampling" concept, under which different sampling >>>>> >>>>>algorithms >>>>> >>>>> >>>>>>>would be organized. During the design phase, the user could >>>>> >>>>>construct a >>>>> >>>>> >>>>>>>workflow based on these high level concepts (Sampling and >>>>>>>StatisticalModel), then bind an actor (already implemented or >>>>> >>>>>using >>>>> >>>>> >>>>>>>Chad's new actor) in a particular view of that workflow. So, a >>>>>>>workflow would be designed at a high conceptual level, and have >>>>> >>>>>multiple >>>>> >>>>> >>>>>>>views, binding different algorithms, and those different views would >>>>> >>>>>be >>>>> >>>>> >>>>>>>logically linked through the high level workflow. The immediate >>>>> >>>>>case is >>>>> >>>>> >>>>>>>the GARP workflow we are designing will need another version for >>>>> >>>>>the >>>>> >>>>> >>>>>>>neural network algorithm, and that version will be virtually an >>>>> >>>>>exact >>>>> >>>>> >>>>>>>replicate except for that actor. 
Seems like it would be better to >>>>> >>>>>have >>>>> >>>>> >>>>>>>one workflow with different views... >>>>>>> >>>>>>>I hope the above is coherent...in reading it, I'm not sure that it >>>>> >>>>>is :-) >>>>> >>>>> >>>>>>>Deana >>>>>>> >>>>>> >>>>>>-- >>>>> >>>>>------------ >>>>>Edward A. Lee, Professor >>>>>518 Cory Hall, UC Berkeley, Berkeley, CA 94720 >>>>>phone: 510-642-0455, fax: 510-642-2739 >>>>>eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal >>>>> >>>>>_______________________________________________ >>>>>seek-kr-sms mailing list >>>>>seek-kr-sms at ecoinformatics.org >>>>>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms >>>>> >> >>_______________________________________________ >>kepler-dev mailing list >>kepler-dev at ecoinformatics.org >>http://www.ecoinformatics.org/mailman/listinfo/kepler-dev From rwilliams at nceas.ucsb.edu Thu Jun 10 11:24:17 2004 From: rwilliams at nceas.ucsb.edu (Rich Williams) Date: Thu, 10 Jun 2004 11:24:17 -0700 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: <40C897B3.3050001@sdsc.edu> Message-ID: (I want to clarify that OWL is not necessarily stored in XML - the XML-RDF syntax is just the most commonly chosen syntax. You can store OWL (and RDF) in much less-verbose, non-XML syntaxes.) I agree that non-OWL-DL constructs should be avoided. The extreme flexibility of RDF and OWL-Full will make generic OWL-Full tools extremely difficult to develop. So far, the main thing that I have wanted to do that is outside OWL-DL is to have a property that takes a class as its value, rather than a class instance. This restriction in expressivity leads to some rather inelegant hacks to work around it and remain in OWL-DL. Another frequent issue is the lack of value restrictions on datatype properties, but I don't think that this is available in OWL-Full either. (One solution is to subtype the xml datatypes to restrict the range of permissible values, but no tools yet support this).
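The datatype-subtyping workaround mentioned just above can be sketched as an XML Schema derived type; the type name and numeric bounds below are invented for illustration, and, as noted, tools of the time did not support using such derived types as OWL property ranges:

```xml
<!-- Hypothetical derived datatype: restricts xsd:decimal to the range
     [0, 1], e.g. for a data property holding a proportional abundance.
     The name "ProportionalAbundance" is an invented example. -->
<xsd:simpleType name="ProportionalAbundance">
  <xsd:restriction base="xsd:decimal">
    <xsd:minInclusive value="0.0"/>
    <xsd:maxInclusive value="1.0"/>
  </xsd:restriction>
</xsd:simpleType>
```

A datatype property's rdfs:range would then point at this user-derived type rather than at the built-in xsd:decimal.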
While I use Protege, I would not claim that it has anything approaching an optimal user interface, and I think that good visualization tools could play a role for the so-called knowledge engineer (knowledge-model engineer?). None of the graphical tools that I have experimented with have been better for me than the Protege tree and dialog-box based user interface, so that's what I use. As far as the GrOWL UI goes, I see no reason why it can't be like the user interface of many commercial software packages, where both entry-level and expert users are accommodated. There could be an easily-accessible set of commonly performed operations (creating subclasses, disjoint, object and datatype properties, some basic property restrictions etc), and the full expressivity of OWL-DL could also be available through "advanced" or "more" buttons in dialogs. Rich > -----Original Message----- > From: seek-kr-sms-admin at ecoinformatics.org > [mailto:seek-kr-sms-admin at ecoinformatics.org]On Behalf Of Shawn Bowers > Sent: Thursday, June 10, 2004 10:18 AM > To: Serguei Krivov > Cc: seek-kr-sms at ecoinformatics.org > Subject: Re: [seek-kr-sms] growl: owl-dl or owl full?! > > > Hi Serguei, > > If you look at the Protege data model, they have a language that offers > similar meta-modeling constructs as found in OWL-Full. > > In my opinion, the use of these constructs, unless you really know what > you are doing, can be confusing and often leads to incomprehensible > conceptual models. > > My general opinion is to not support similar constructs in GrOWL. > > But, it isn't clear to me at this point who the target user is of the > GrOWL onto editing and management tools. If it is scientists and other > domain experts, I think most of the OWL-DL and even OWL-Lite constructs > will be too much.
For these users, I think we need to be very clear > about what modeling constructs we want to support (e.g., these > constructs may be at a "higher" level than OWL-DL constructs), > explicitly support the needed constructs through visual notations (not > OWL formulas); then figure out how those constructs are realized by > OWL-Lite or OWL-DL. Since GrOWL seems to be on track to output OWL > ontologies, these can be further edited by a knowledge "engineer" if > needed (to add more constraints). However, if the target user group is > knowledge engineers, e.g., Rich and the KR group, doesn't Protege > already offer the necessary interface? > > In general, the family of OWL standards are complex, with many modeling > constructs, and verbose, not only because OWL is stored via XML, but > also because it is based on RDF. I think there is a definite need for > ontology tools that do more than just expose OWL or any other DL -- like > XML, OWL is much better suited as a storage and exchange language, not > as an interface in and of itself for users. > > So, my overall suggestion, would be to figure out the necessary > constructs for the target user group (which I'd be happy to help with), > figure out how best to present these to the user (again, I'd be happy to > help with this), then figure out if it is representable in OWL-Lite, > OWL-DL (most likely), or OWL-Full (not likely). > > > shawn > > Serguei Krivov wrote: > > Hi All, > > > > I am working on growl editing and have an urgent design issue: > > > > Should we impose the editing discipline which would allow owl dl > > constructs only and do nothing when user tries to make an owl full > > construct? Some editors like oil-edit (that works with owl now) are > > intolerant to owl full. Personally I think that this is right since > > owl-full ontologies are difficult to use. OWLAPI seems also not really > > happy to see owl-full constructs, it reports error, however somehow it > > processes them.
> > > > Ideally one can have a trigger which switch owl-dl discipline on and > > off. But implementing such trigger would increase the editing code may > > be 1.6 times comparing to making plain owl-dl discipline. I would leave > > this for the future, but you guys may have other suggestions (?) > > > > Please let me know what you think. > > > > serguei > > > > > > > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms From Serguei.Krivov at uvm.edu Thu Jun 10 11:49:52 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Thu, 10 Jun 2004 14:49:52 -0400 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: Message-ID: <000001c44f1b$b6ce2110$3ca6c684@BTS2K3000D5635086B> Another frequent issue is the lack of value restrictions on datatype properties, but I don't think that this is available in OWL-Full either. (One solution is to subtype the xml datatypes to restrict the range of permissible values, but no tools yet support this). It is a concrete domain expression and I do not think it is available in OWL-Full. But apparently Racer has some support for datatype property range/value restrictions; see http://www.sts.tu-harburg.de/~r.f.moeller/racer/racer-manual-1-7-7.pdf, page 47 From eal at eecs.berkeley.edu Thu Jun 10 12:11:02 2004 From: eal at eecs.berkeley.edu (Edward A Lee) Date: Thu, 10 Jun 2004 12:11:02 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <1086886674.4449.37.camel@basil.snr.uvm.edu> References: <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> Message-ID: <5.1.0.14.2.20040610121038.00bc5f18@mho.eecs.berkeley.edu> At 12:57 PM 6/10/2004 -0400, Ferdinando Villa wrote: >By the way - >what is TLA? :-) It's a "Three Letter Acronym" :-) Edward ------------ Edward A. Lee, Professor 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 phone: 510-642-0455, fax: 510-642-2739 eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal From franz at nceas.ucsb.edu Thu Jun 10 15:19:29 2004 From: franz at nceas.ucsb.edu (Nico M.
Franz) Date: Thu, 10 Jun 2004 15:19:29 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <40C89F92.9080404@sdsc.edu> References: <1086887659.4449.46.camel@basil.snr.uvm.edu> <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <1086887039.40c8947f7e935@mail.lternet.edu> <1086887659.4449.46.camel@basil.snr.uvm.edu> Message-ID: <5.2.1.1.2.20040610144840.017e9d50@hyperion.nceas.ucsb.edu> Hi there: There is a parallel in the taxon group that works, as long as "bottom-up / top-down" is viewed as a handle for describing alternative points of attack, not as a crossroads with no turning back. We're also trying to transfer and enliven the legacy on-line. Taxonomic names are very revisionary in the sense that they can only have one particular status (accepted OR rejected) and referent (set of entities being referred to) per author per time. Concepts (names *as used in* a particular reference), in turn, can convey various meanings of the same name through time. Our transfer schema (Edinburgh) and the new database structure being developed at KU intend to translate names into concepts, and manage their origins and relations properly. They're tools with which we hope to sway taxonomic providers to restructure their information in such a way that it can be placed on, and interact with, a larger taxonomic database network. For the moment, our own developing efforts are mostly "bottom-up." But "legacy" in taxonomy takes a deeper meaning. Historical taxonomic publications indeed have a "legal status." The codes of nomenclature promote a so-called principle of priority, which means that the first published use of a name - whether valid or not at this moment - MUST be considered in a revisionary publication. Per codes, good and bad names are forever to be considered, perhaps unlike good and bad ecological data.
So this special legal status of taxonomic publications creates a "top-down" issue that seems to hit taxonomy particularly hard. Even if we had our technical stuff together within SEEK, we still need to present taxonomists with a long-term sustainable, socio-economic model that would instill confidence in them that their efforts are preserved as long as necessary - which is LONG. Taxonomists are kind of poor actually, and so they don't have one themselves. As a pragmatic move, I think we're first trying to involve institutions (such as USDA-ITIS) that already have a (perceived) long-term internet business up and running. Their databases may not be the fanciest, but they tend to last longer. Here we're making a necessary concession to "top-down" forces, even though "bottom-up" we can envision much more capable database systems to manage information. Nico At 10:51 AM 6/10/2004 -0700, Shawn Bowers wrote: > From my understanding of the goals of SEEK, I think what you describe > below Ferdinando is very much the ultimate goal. Although it has never > been formally stated, I think we are working "bottom up" to achieve the > goal; building the necessary infrastructure so operations as you envision > below can be realized. I believe that a driving factor for developing > the system bottom up is because we are charged with "enlivening" legacy > data and services. > >shawn [snip] Nico M. Franz National Center for Ecological Analysis and Synthesis 735 State Street, Suite 300 Santa Barbara, CA 93101 Phone: (805) 966-1677; Fax: (805) 892-2510; E-mail: franz at nceas.ucsb.edu Website: http://www.cals.cornell.edu/dept/entomology/wheeler/Franz/Nico.htm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040610/bcbe9bb8/attachment.htm From rods at ku.edu Fri Jun 11 07:04:17 2004 From: rods at ku.edu (Rod Spears) Date: Fri, 11 Jun 2004 09:04:17 -0500 Subject: [seek-kr-sms] UI In-Reply-To: <40C629D8.1080107@lternet.edu> References: <40C629D8.1080107@lternet.edu> Message-ID: <40C9BBE1.7010304@ku.edu> (This is a general reply to the entire thread that is on seek-kr-sms): In the end, there are really two very simple questions about what we are all doing on SEEK: 1) Can we make it work? a) This begs the question of "how" to make it work. 2) Will anybody use it? a) This begs the question of "can" anybody use it? Shawn is right when he says we are coming at this from the "bottom-up." SEEK has been very focused on the mechanics of how to take legacy data and modeling techniques and create a new environment to "house" them and better utilize them. In the end, if you can't answer question #1, it doesn't matter whether you can answer question #2. But at the same time I have felt that we have been a little too focused on #1, or at the very least we haven't been spending enough time on question #2. Both Nico and Ferdinando touched on two very important aspects of what we are talking about. Nico's comment about attacking the problem from "both" ends (top down and bottom up) seems very appropriate. In fact, the more we know about the back-end the better we know what "tools" or functionality we have to develop for the front-end and how best they can interact. Ferdinando's comment touches on the core of what concerns me the most, and it is the realization of question #2. His comment: "/I also think that the major impediment to an understanding that requires a paradigm switch is the early idealization of a graphical user interface/." Or more appropriately known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). We absolutely have to create a tool that scientists can use.
So this means we have to create a tool that "engages" the way they think about modeling problems. Note that I used the word "engage", meaning the tool doesn't have to be an exact reflection of their process for creating models and doing analysis, but it has to be close enough to make them want to "step up to the plate" and "take a swing for the fence" as it were. In many ways too, Ferdinando's comment touches on the problem I have always had with Kepler. The UI is completely intertwined with the model definition and the analysis specification. It has nearly zero flexibility in how one "views" the "process" of entering in the model. (As a side note, the UI is one of the harder aspects of Kepler to tailor.) In a perfect world of time and budgets it would be nice to create a tool that has a standalone Modeling and Analysis Definition Language, then a core standalone analysis/simulation engine, and lastly a set of GUI tools that assist the scientists in creating the models and monitoring the execution. Notice how the GUI came last? The GUI needs to be born out of the underlying technology instead of defining it. I am a realist and I understand how much functionality Kepler brings to the table; it gives us such a head start in AMS. Maybe we need to start thinking about a more "conceptual" tool that fits in front of Kepler, but before that we need to really understand how the average scientist would approach the SEEK technology. I'll say this as a joke: "but that pretty much excludes any scientist working on SEEK," but it is true. Never let the folks creating the technology tell you how the technology should be used; that's the responsibility of the user. I know the word "use case" has been thrown around daily as if it were confetti, but I think the time is approaching when we need to really focus on developing some "real" end-user use cases. I think a much bigger effort and emphasis needs to be placed on the "top-down."
And some of the ideas presented in this entire thread are a good start. Rod Deana Pennington wrote: > In thinking about the Kepler UI, it has occurred to me that it would > really be nice if the ontologies that we construct to organize the > actors into categories, could also be used in a high-level workflow > design phase. For example, in the niche modeling workflow, GARP, > neural networks, GRASP and many other algorithms could be used for > that one step in the workflow. Those algorithms would all be > organized under some high-level hierarchy ("StatisticalModels"). > Another example is the Pre-sample step, where we are using the GARP > pre-sample algorithm, but other sampling algorithms could be > substituted. There should be a high-level "Sampling" concept, under > which different sampling algorithms would be organized. During the > design phase, the user could construct a workflow based on these high > level concepts (Sampling and StatisticalModel), then bind an actor > (already implemented or using Chad's new actor) in a particular view > of that workflow. So, a workflow would be designed at a high > conceptual level, and have multiple views, binding different > algorithms, and those different views would be logically linked > through the high level workflow. The immediate case is the GARP > workflow we are designing will need another version for the neural > network algorithm, and that version will be virtually an exact > replicate except for that actor. Seems like it would be better to > have one workflow with different views... > > I hope the above is coherent...in reading it, I'm not sure that it is > :-) > > Deana > > -------------- next part -------------- An HTML attachment was scrubbed...
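The conceptual-workflow idea quoted above (one high-level workflow, multiple views binding concrete actors to concepts like "Sampling" and "StatisticalModel") can be sketched as a small data structure. This is only an illustrative sketch; the class and all concept/actor names (ConceptualWorkflow, GARPPresample, NeuralNet, etc.) are hypothetical labels for this example and do not reflect Kepler's actual API.

```python
class ConceptualWorkflow:
    """A workflow defined as an ordered list of high-level concepts,
    with named views that bind a concrete actor to each concept."""

    def __init__(self, name, concepts):
        self.name = name
        self.concepts = list(concepts)   # e.g. ["Sampling", "StatisticalModel"]
        self.views = {}                  # view name -> {concept: actor}

    def add_view(self, view_name, bindings):
        """Bind a concrete actor to every concept; reject incomplete views."""
        missing = [c for c in self.concepts if c not in bindings]
        if missing:
            raise ValueError(f"view {view_name!r} leaves concepts unbound: {missing}")
        self.views[view_name] = dict(bindings)

    def actors(self, view_name):
        """The concrete actor sequence for one view of the shared workflow."""
        bindings = self.views[view_name]
        return [bindings[c] for c in self.concepts]

# The niche-modeling case: two views that differ only in the model actor,
# instead of two near-identical copies of the whole workflow.
wf = ConceptualWorkflow("NicheModel", ["Sampling", "StatisticalModel"])
wf.add_view("garp",   {"Sampling": "GARPPresample", "StatisticalModel": "GARP"})
wf.add_view("neural", {"Sampling": "GARPPresample", "StatisticalModel": "NeuralNet"})

print(wf.actors("garp"))    # ['GARPPresample', 'GARP']
print(wf.actors("neural"))  # ['GARPPresample', 'NeuralNet']
```

Because both views are logically linked through the one conceptual workflow, swapping GARP for a neural network is a one-line rebinding rather than a duplicated workflow, which is the point of Deana's "one workflow with different views" suggestion.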
URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040611/d96552f6/attachment.htm From bowers at sdsc.edu Fri Jun 11 13:32:52 2004 From: bowers at sdsc.edu (Shawn Bowers) Date: Fri, 11 Jun 2004 13:32:52 -0700 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40C9BBE1.7010304@ku.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> Message-ID: <40CA16F4.9060204@sdsc.edu> Rod Spears wrote: > (This is a general reply to the entire thread that is on seek-kr-sms): > > In the end, there are really two very simple questions about what we are > all doing on SEEK: > > 1) Can we make it work? > a) This begs the question of "how" to make it work. > > 2) Will anybody use it? > a) This begs the question of "can" anybody use it? > > Shawn is right when he says we are coming at this from the "bottom-up." > SEEK has been very focused on the mechanics of how to take legacy data > and modeling techniques and create a new environment to "house" them and > better utilize them. In the end, if you can't answer question #1, it > does matter whether you can answer question #2. > > But at the same time I have felt that we have been a little too focused > on #1, or at the very least we haven't been spending enough time on > question #2. > > Both Nico and Fernando touched on two very important aspects of what we > are talking about. Nico's comment about attacking the problem from > "both" ends (top down and bottom up) seems very appropriate. In fact, > the more we know about the back-end the better we know what "tools" or > functionality we have to develop for the front-end and how best they can > interact. > > Fernando's comment touches on the core of what concerns me the most, and > it is the realization of question #2 > His comment: "/I also think that the major impediment to an > understanding that requires a paradigm switch is the early idealization > of a graphical user interface/." 
Or more appropriately known as "the > seduction of the GUI." (Soon to be a Broadway play ;-) ). > > We absolutely have to create a tool that scientists can use. So this > means we have to create a tool that "engages" the way they think about > modeling problems. Note that I used the word "engage", meaning the tool > doesn't to be an exact reflection of their process for creating a models > and doing analysis, but if has to be close enough to make them want to > "step up to the plate" and "take a swing for the fence" as it were. > > In many ways too, Fernando's comment touch on the the problem I have > always had with Kepler. The UI is completely intertwined with the model > definition and the analysis specification. It has nearly zero > flexibility in how one "views" the "process" of entering in the model. > (As a side note, the UI is one of the harder aspects of Kepler to tailor) > > In a perfect world of time and budgets it would be nice to create a tool > that has standalone Modeling and Analysis Definition Language, then a > core standalone analysis/simulation engine, and lastly a set of GUI > tools that assist the scientists in creating the models and monitoring > the execution. Notice how the GUI came last? The GUI needs to be born > out of the underlying technology instead of defining it. > > I am a realist and I understand how much functionality Kepler brings to > the table, it gives us such a head start in AMS. Maybe we need to start > thinking about a more "conceptual" tool that fits in front of Kelper, > but before that we need to really understand how the average scientist > would approach the SEEK technology. I'll say this as a joke: "but that > pretty much excludes any scientist working on SEEK," but it is true. > Never let the folks creating the technology tell you how the technology > should be used, that's the responsibility of the user. 
> > I know the word "use case" has been thrown around daily as if it were > confetti, but I think the time is approaching where we need to really > focus on developing some "real" end-user use cases. I think a much > bigger effort and emphasis needs to be placed on the "top-down." And > some of the ideas presented in this entire thread is a good start. Great synthesis and points Rod. (Note that I un-cc'd kepler-dev, since this discussion is very much seek-specific) I agree with you, Nico, and Ferdinando that we need top-down development (i.e., an understanding of the targeted user problems and needs, and how best to address these via end-user interfaces) as well as bottom-up development (underlying technology, etc.). I think that in general, we are at a point in the project where we have a good idea of the kinds of solutions we can provide (e.g., with EcoGrid, Kepler, SMS, Taxon, and so on). And, we are beginning to get to the point where we are building/needing user interfaces: we are beginning to design/implement add-ons to Kepler, e.g., for EcoGrid querying and Ontology-enabled actor/dataset browsing; GrOWL is becoming our user-interface for ontologies; we are designing a user interface for annotating actors and datasets (for datasets, there are also UIs such as Morpho); and working on taxonomic browsing. I definitely think that now is a great time in the project to take a step back, and as these interfaces are being designed and implemented (as well as the lower-level technology), to be informed by real user-needs. Here is what I think needs to be done to do an effective top-down design: 1. Clearly identify our target user group(s) and the general benefit we believe SEEK will provide to these groups. In particular, who are we developing the "SEEK system" for, and what are their problems/needs and constraints. Capture this as a report.
(As an aside, it will be very hard to evaluate the utility of SEEK without understanding who it is meant to help, and how it is meant to help them.) 2. Assemble a representative group of target users. As Rod suggests, there should be participants that are independent of SEEK. [I attended one meeting that was close to this in Abq in Aug. 2003 -- have there been others?] 3. Identify the needs of the representative group in terms of SEEK. These might be best represented as "user stories" (i.e., scenarios) initially as opposed to use cases. I think there are two types of user stories that are extremely beneficial: (1) a scenario of how some process works now, e.g., the story of a scientist that needed to run a niche model; (2) ask the user to tell us "how you would like the system to work" for the stories from 1. 4. Synthesize the user stories into a set of target use cases that touch a wide range of functionality. Develop and refine the use cases. 5. From the use cases and user constraints, design one or more "storyboard" user interfaces, or the needed user interface components from the use cases. At this point, there may be different possible interfaces, e.g., a high-level ontology based interface as suggested by Ferdinando and a low-level Kepler-based interface. This is where we need to be creative to address user needs. 6. Get feedback from the target users on the "storyboard" interfaces (i.e., let them evaluate the interfaces). Revisit the user stories via the storyboards. Refine the second part of 3, and iterate 5 and 6. 7. Develop one or more "prototypes" (i.e., the interface with canned functionality). Let the user group play with it, get feedback, and iterate. 8. The result should be "the" user interface.
One of the most important parts of this process is to identify the desired characteristics of the target users, and to pick a representative group of users that can lead to the widest array of use-cases/user-stories that are most beneficial to the target users. For example, we have primarily focused on niche-modeling as the use case. (This isn't a great example, but bear with me.) If our sample user group only consisted of scientists that did niche modeling, or if this were our target user group, we would probably build a user interface around, and specific to, niche modeling (i.e., niche modeling should become an integral, and probably embedded, part of the interface). Of course, for us, this isn't necessarily true because we know we have a more general target user group. But, hopefully you get the point. shawn > > Rod > > > Deana Pennington wrote: > >> In thinking about the Kepler UI, it has occurred to me that it would >> really be nice if the ontologies that we construct to organize the >> actors into categories, could also be used in a high-level workflow >> design phase. For example, in the niche modeling workflow, GARP, >> neural networks, GRASP and many other algorithms could be used for >> that one step in the workflow. Those algorithms would all be >> organized under some high-level hierarchy ("StatisticalModels"). >> Another example is the Pre-sample step, where we are using the GARP >> pre-sample algorithm, but other sampling algorithms could be >> substituted. There should be a high-level "Sampling" concept, under >> which different sampling algorithms would be organized. During the >> design phase, the user could construct a workflow based on these high >> level concepts (Sampling and StatisticalModel), then bind an actor >> (already implemented or using Chad's new actor) in a particular view >> of that workflow.
So, a workflow would be designed at a high >> conceptual level, and have multiple views, binding different >> algorithms, and those different views would be logically linked >> through the high level workflow. The immediate case is the GARP >> workflow we are designing will need another version for the neural >> network algorithm, and that version will be virtually an exact >> replicate except for that actor. Seems like it would be better to >> have one workflow with different views... >> >> I hope the above is coherent...in reading it, I'm not sure that it is >> :-) >> >> Deana >> >> > From dpennington at lternet.edu Fri Jun 11 13:46:43 2004 From: dpennington at lternet.edu (Deana Pennington) Date: Fri, 11 Jun 2004 14:46:43 -0600 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CA16F4.9060204@sdsc.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> Message-ID: <40CA1A33.7090600@lternet.edu> Shawn & Rod, I think these are all great suggestions, and we've been discussing putting together a group of ecologists for a couple of days of testing, but: 1) we thought that there are some major issues with the interface as it stands right now that need to be fixed before we try to get a group together, and 2) a decision needs to be made about the usability engineer position, so that person can be involved right from the start in user testing and UI design. So, I think we should table this discussion until the above 2 things are resolved. It's obvious that this needs to be addressed soon. Deana Shawn Bowers wrote: > > > Rod Spears wrote: > >> (This is a general reply to the entire thread that is on seek-kr-sms): >> >> In the end, there are really two very simple questions about what we >> are all doing on SEEK: >> >> 1) Can we make it work? >> a) This begs the question of "how" to make it work. >> >> 2) Will anybody use it? >> a) This begs the question of "can" anybody use it?
>> >> Shawn is right when he says we are coming at this from the >> "bottom-up." SEEK has been very focused on the mechanics of how to >> take legacy data and modeling techniques and create a new environment >> to "house" them and better utilize them. In the end, if you can't >> answer question #1, it does matter whether you can answer question #2. >> >> But at the same time I have felt that we have been a little too >> focused on #1, or at the very least we haven't been spending enough >> time on question #2. >> >> Both Nico and Fernando touched on two very important aspects of what >> we are talking about. Nico's comment about attacking the problem from >> "both" ends (top down and bottom up) seems very appropriate. In >> fact, the more we know about the back-end the better we know what >> "tools" or functionality we have to develop for the front-end and how >> best they can interact. >> >> Fernando's comment touches on the core of what concerns me the most, >> and it is the realization of question #2 >> His comment: "/I also think that the major impediment to an >> understanding that requires a paradigm switch is the early >> idealization of a graphical user interface/." Or more appropriately >> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >> >> We absolutely have to create a tool that scientists can use. So this >> means we have to create a tool that "engages" the way they think >> about modeling problems. Note that I used the word "engage", meaning >> the tool doesn't to be an exact reflection of their process for >> creating a models and doing analysis, but if has to be close enough >> to make them want to "step up to the plate" and "take a swing for the >> fence" as it were. >> >> In many ways too, Fernando's comment touch on the the problem I have >> always had with Kepler. The UI is completely intertwined with the >> model definition and the analysis specification. 
It has nearly zero >> flexibility in how one "views" the "process" of entering in the >> model. (As a side note, the UI is one of the harder aspects of Kepler >> to tailor) >> >> In a perfect world of time and budgets it would be nice to create a >> tool that has standalone Modeling and Analysis Definition Language, >> then a core standalone analysis/simulation engine, and lastly a set >> of GUI tools that assist the scientists in creating the models and >> monitoring the execution. Notice how the GUI came last? The GUI needs >> to be born out of the underlying technology instead of defining it. >> >> I am a realist and I understand how much functionality Kepler brings >> to the table, it gives us such a head start in AMS. Maybe we need to >> start thinking about a more "conceptual" tool that fits in front of >> Kelper, but before that we need to really understand how the average >> scientist would approach the SEEK technology. I'll say this as a >> joke: "but that pretty much excludes any scientist working on SEEK," >> but it is true. Never let the folks creating the technology tell you >> how the technology should be used, that's the responsibility of the >> user. >> >> I know the word "use case" has been thrown around daily as if it were >> confetti, but I think the time is approaching where we need to really >> focus on developing some "real" end-user use cases. I think a much >> bigger effort and emphasis needs to be placed on the "top-down." And >> some of the ideas presented in this entire thread is a good start. > > > Great synthesis and points Rod. > > (Note that I un-cc'd kepler-dev, since this discussion is very much > seek-specific) > > I agree with you, Nico, and Ferdinando that we need top-down > development (i.e., an understanding of the targeted user problems and > needs, and how best to address these via end-user interfaces) as well > as bottom-up development (underlying technology, etc.). 
> > I think that in general, we are at a point in the project where we > have a good idea of the kinds of solutions we can provide (e.g., with > EcoGrid, Kepler, SMS, Taxon, and so on). > > And, we are beginning to get to the point where we are > building/needing user interfaces: we are beginning to design/implement > add-ons to Kepler, e.g., for EcoGrid querying and Ontology-enabled > actor/dataset browsing; GrOWL is becoming our user-interface for > ontologies; we are designing a user interface for annotating actors > and datasets (for datasets, there are also UIs such as Morhpo); and > working on taxonomic browsing. > > I definately think that now in the project is a great time to take a > step back, and as these interfaces are being designed and implemented > (as well as the lower-level technology), to be informed by real > user-needs. > > > Here is what I think needs to be done to do an effective top-down design: > > 1. Clearly identify our target user group(s) and the general benefit > we believe SEEK will provide to these groups. In particular, who are > we developing the "SEEK system" for, and what are their problems/needs > and constraints. Capture this as a report. (As an aside, it will be > very hard to evaluate the utility of SEEK without understanding who it > is meant to help, and how it is meant to help them.) > > 2. Assemble a representive group of target users. As Rod suggests, > there should be participants that are independent of SEEK. [I attended > one meeting that was close to this in Abq in Aug. 2003 -- have there > been others?] > > 3. Identify the needs of the representive group in terms of SEEK. > These might be best represented as "user stories" (i.e., scenarios) > initially as opposed to use cases. 
I think there are two types of > user stories that are extremely benefitial: (1) as a scenario of how > some process works now, e.g., the story of a scientist that needed to > run a niche model; (2) ask the user to tell us "how you would like the > system to work" for the stories from 1. > > 4. Synthesize the user stories into a set of target use cases that > touch a wide range of functionality. Develop and refine the use cases. > > 5. From the use cases and user constraints, design one or more > "storyboard" user interfaces, or the needed user interface components > from the use cases. At this point, there may be different possible > interfaces, e.g., a high-level ontology based interface as suggested > by Ferdinando and a low-level Kepler-based interface. This is where > we need to be creative to address user needs. > > 6. Get feedback from the target users on the "storyboard" interfaces > (i.e., let them evaluate the interfaces). Revisit the user stories via > the storyboards. Refine the second part of 3, and iterate 5 and 6. > > 7. Develop one or more "prototypes" (i.e., the interface with canned > functionality). Let the user group play with it, get feedback, and > iterate. > > 8. The result should be "the" user interface. > > > One of the most important parts of this process is to identify the > desired characteristics of the target users, and to pick a > representative group of users that can lead to the widest array of > use-cases/user-stories that are most benefitial to the target users. > > For example, we have primarily focused on niche-modeling as the use > case. (This isn't a great example, but bear with me) If our sample > user group only consisted of scientists that did niche modeling, or if > this were our target user group, we would probably build a user > interface around, and specific to niche modeling (i.e., niche modeling > should become an integral, and probably embedded, part of the > interface). 
Of course, for us, this isn't necessarily true because we > know we have a more general target user group. But, hopefully you get > the point. > > > shawn > > >> >> Rod >> >> >> Deana Pennington wrote: >> >>> In thinking about the Kepler UI, it has occurred to me that it would >>> really be nice if the ontologies that we construct to organize the >>> actors into categories, could also be used in a high-level workflow >>> design phase. For example, in the niche modeling workflow, GARP, >>> neural networks, GRASP and many other algorithms could be used for >>> that one step in the workflow. Those algorithms would all be >>> organized under some high-level hierarchy ("StatisticalModels"). >>> Another example is the Pre-sample step, where we are using the GARP >>> pre-sample algorithm, but other sampling algorithms could be >>> substituted. There should be a high-level "Sampling" concept, under >>> which different sampling algorithms would be organized. During the >>> design phase, the user could construct a workflow based on these >>> high level concepts (Sampling and StatisticalModel), then bind an >>> actor (already implemented or using Chad's new actor) in a >>> particular view of that workflow. So, a workflow would be designed >>> at a high conceptual level, and have multiple views, binding >>> different algorithms, and those different views would be logically >>> linked through the high level workflow. The immediate case is the >>> GARP workflow we are designing will need another version for the >>> neural network algorithm, and that version will be virtually an >>> exact replicate except for that actor. Seems like it would be >>> better to have one workflow with different views... 
>>> >>> I hope the above is coherent...in reading it, I'm not sure that it >>> is :-) >>> >>> Deana >>> >>> >> > > _______________________________________________ > seek-dev mailing list > seek-dev at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-dev -- ******** Deana D. Pennington, PhD Long-term Ecological Research Network Office UNM Biology Department MSC03 2020 1 University of New Mexico Albuquerque, NM 87131-0001 505-272-7288 (office) 505 272-7080 (fax) From franz at nceas.ucsb.edu Fri Jun 11 16:29:38 2004 From: franz at nceas.ucsb.edu (Nico M. Franz) Date: Fri, 11 Jun 2004 16:29:38 -0700 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CA16F4.9060204@sdsc.edu> References: <40C9BBE1.7010304@ku.edu> <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> Message-ID: <5.2.1.1.2.20040611153007.017f1f98@hyperion.nceas.ucsb.edu> To Shawn: I think item 9. (or 1.?) on your list ought to read: "invite potential sponsors and users on a two-week cruise to the Bahamas." But then perhaps academics' role in society is to be a bit more idealistic.. To all: I want to make it clear that in my previous e-mail, the "top" for taxonomy wasn't a group of scientists or a set of relevant conceptions or practices (however fixed or flexible). In that sense I abducted Ferdinando's and Shawn's original points for different purposes. I truly think that the taxon group's top constraint is one related to long-term persistence, and to the economic solutions required to provide this persistence. I have personally rather quickly given up on the idea of lobbying our ideas about taxonomic concepts with the expert community by approaching them directly. The ideas may well be brilliant, very useful and highly necessary. But the experts will rightly ask: "how long will my contributions last on-line, and who will pay for their long-term maintenance." 
If we cannot give satisfactory answers to these questions (and possibly some experts will be satisfied later than others), then we should not expect the experts to join in en masse. Doing real taxonomic work on-line (e.g. by connecting concepts from different classificatory schemes through synonymies) just can't be ephemeral. It needs to stand on a socio-economic foundation (nearly) as solid as the print publishing/library/museum conservation process. Otherwise, we implicitly force experts to invest their careers and expertise in ventures whose futures are not sufficiently clear to even meet the requirements of the Codes, much less a sustained and efficient communication about taxa. At best we'll lure some experts that are so secure academically that they can afford to put things on-line and risk that it will be lost soon. This work might not enter into the taxonomic legacy, and it might compromise their situation when applying for positions and funding. It's close to a non-starter, and we have evidence in the field already to support this conclusion. I do not know to what extent these issues apply to LTER-related aims. As I said before, my hunch is that taxonomy has less money and more inherent demands for long-term persistence. This makes it harder and explains the tediousness of the process, and our awareness of it. So what are we trying to do in the taxon group? I think we're like a think tank that NSF funds at the moment though we're really working for a taxonomic service organization - yet to be named - with perceived long-term funding. It just might be USDA-ITIS. It might be a large scientific publisher like Thomson. It won't be any group of scientists, maybe not even a consortium of museums. The implementation of our ideas won't be exactly democratic, or by consensus among many experts. It will be (slightly) forceful and unilateral. As far as the science goes, the system won't be perfect or even good at the start. But it will come with economic respectability.
The rest may eventually follow. To even speak with a taxonomic service provider that can look out for its own survival, we need some bottom-up work. We need arguments that will make the provider think that our bottom-up package provides a cognitive and thus economic edge over competitors. Once the persistence issues are adequately addressed, we expect users to honor our unilateral efforts, since we jumped over a hurdle nobody else has taken so far. The platform can then deepen and expand. The key issue is to understand and minimize the risk of it ever going away. That's a "top", it's just not a purely academic one. Of course I wish that had all been understood long ago and addressed by now. But even the scientific publishers - who are loaded and do nothing but worry about the economics of internet publishing and maintenance of information - are fairly new to the business of putting their money only on-line. I believe that their and taxonomy's interests and requirements are in many ways alike. Nico At 01:32 PM 6/11/2004 -0700, Shawn Bowers wrote: >Rod Spears wrote: > >>(This is a general reply to the entire thread that is on seek-kr-sms): >>In the end, there are really two very simple questions about what we are >>all doing on SEEK: >>1) Can we make it work? >> a) This begs the question of "how" to make it work. >>2) Will anybody use it? >> a) This begs the question of "can" anybody use it? >>Shawn is right when he says we are coming at this from the "bottom-up." >>SEEK has been very focused on the mechanics of how to take legacy data >>and modeling techniques and create a new environment to "house" them and >>better utilize them. In the end, if you can't answer question #1, it does >>matter whether you can answer question #2. >>But at the same time I have felt that we have been a little too focused >>on #1, or at the very least we haven't been spending enough time on >>question #2. 
>>Both Nico and Fernando touched on two very important aspects of what we >>are talking about. Nico's comment about attacking the problem from "both" >>ends (top down and bottom up) seems very appropriate. In fact, the more >>we know about the back-end the better we know what "tools" or >>functionality we have to develop for the front-end and how best they can >>interact. >>Fernando's comment touches on the core of what concerns me the most, and >>it is the realization of question #2 >>His comment: "/I also think that the major impediment to an understanding >>that requires a paradigm switch is the early idealization of a graphical >>user interface/." Or more appropriately known as "the seduction of the >>GUI." (Soon to be a Broadway play ;-) ). >>We absolutely have to create a tool that scientists can use. So this >>means we have to create a tool that "engages" the way they think about >>modeling problems. Note that I used the word "engage", meaning the tool >>doesn't to be an exact reflection of their process for creating a models >>and doing analysis, but if has to be close enough to make them want to >>"step up to the plate" and "take a swing for the fence" as it were. >>In many ways too, Fernando's comment touch on the the problem I have >>always had with Kepler. The UI is completely intertwined with the model >>definition and the analysis specification. It has nearly zero flexibility >>in how one "views" the "process" of entering in the model. (As a side >>note, the UI is one of the harder aspects of Kepler to tailor) >>In a perfect world of time and budgets it would be nice to create a tool >>that has standalone Modeling and Analysis Definition Language, then a >>core standalone analysis/simulation engine, and lastly a set of GUI tools >>that assist the scientists in creating the models and monitoring the >>execution. Notice how the GUI came last? The GUI needs to be born out of >>the underlying technology instead of defining it. 
>>I am a realist and I understand how much functionality Kepler brings to >>the table, it gives us such a head start in AMS. Maybe we need to start >>thinking about a more "conceptual" tool that fits in front of Kelper, but >>before that we need to really understand how the average scientist would >>approach the SEEK technology. I'll say this as a joke: "but that pretty >>much excludes any scientist working on SEEK," but it is true. Never let >>the folks creating the technology tell you how the technology should be >>used, that's the responsibility of the user. >>I know the word "use case" has been thrown around daily as if it were >>confetti, but I think the time is approaching where we need to really >>focus on developing some "real" end-user use cases. I think a much bigger >>effort and emphasis needs to be placed on the "top-down." And some of the >>ideas presented in this entire thread is a good start. > >Great synthesis and points Rod. > >(Note that I un-cc'd kepler-dev, since this discussion is very much >seek-specific) > >I agree with you, Nico, and Ferdinando that we need top-down development >(i.e., an understanding of the targeted user problems and needs, and how >best to address these via end-user interfaces) as well as bottom-up >development (underlying technology, etc.). > >I think that in general, we are at a point in the project where we have a >good idea of the kinds of solutions we can provide (e.g., with EcoGrid, >Kepler, SMS, Taxon, and so on). > >And, we are beginning to get to the point where we are building/needing >user interfaces: we are beginning to design/implement add-ons to Kepler, >e.g., for EcoGrid querying and Ontology-enabled actor/dataset browsing; >GrOWL is becoming our user-interface for ontologies; we are designing a >user interface for annotating actors and datasets (for datasets, there are >also UIs such as Morhpo); and working on taxonomic browsing. 
> >I definately think that now in the project is a great time to take a step >back, and as these interfaces are being designed and implemented (as well >as the lower-level technology), to be informed by real user-needs. > > >Here is what I think needs to be done to do an effective top-down design: > >1. Clearly identify our target user group(s) and the general benefit we >believe SEEK will provide to these groups. In particular, who are we >developing the "SEEK system" for, and what are their problems/needs and >constraints. Capture this as a report. (As an aside, it will be very hard >to evaluate the utility of SEEK without understanding who it is meant to >help, and how it is meant to help them.) > >2. Assemble a representive group of target users. As Rod suggests, there >should be participants that are independent of SEEK. [I attended one >meeting that was close to this in Abq in Aug. 2003 -- have there been others?] > >3. Identify the needs of the representive group in terms of SEEK. These >might be best represented as "user stories" (i.e., scenarios) initially as >opposed to use cases. I think there are two types of user stories that >are extremely benefitial: (1) as a scenario of how some process works now, >e.g., the story of a scientist that needed to run a niche model; (2) ask >the user to tell us "how you would like the system to work" for the >stories from 1. > >4. Synthesize the user stories into a set of target use cases that touch a >wide range of functionality. Develop and refine the use cases. > >5. From the use cases and user constraints, design one or more >"storyboard" user interfaces, or the needed user interface components from >the use cases. At this point, there may be different possible interfaces, >e.g., a high-level ontology based interface as suggested by Ferdinando and >a low-level Kepler-based interface. This is where we need to be creative >to address user needs. > >6. 
Get feedback from the target users on the "storyboard" interfaces >(i.e., let them evaluate the interfaces). Revisit the user stories via the >storyboards. Refine the second part of 3, and iterate 5 and 6. > >7. Develop one or more "prototypes" (i.e., the interface with canned >functionality). Let the user group play with it, get feedback, and iterate. > >8. The result should be "the" user interface. > > >One of the most important parts of this process is to identify the desired >characteristics of the target users, and to pick a representative group of >users that can lead to the widest array of use-cases/user-stories that are >most beneficial to the target users. > >For example, we have primarily focused on niche-modeling as the use case. >(This isn't a great example, but bear with me.) If our sample user group >only consisted of scientists that did niche modeling, or if this were our >target user group, we would probably build a user interface around, and >specific to niche modeling (i.e., niche modeling should become an >integral, and probably embedded, part of the interface). Of course, for >us, this isn't necessarily true because we know we have a more general >target user group. But, hopefully you get the point. > > >shawn > > >>Rod >> >>Deana Pennington wrote: >> >>>In thinking about the Kepler UI, it has occurred to me that it would >>>really be nice if the ontologies that we construct to organize the >>>actors into categories, could also be used in a high-level workflow >>>design phase. For example, in the niche modeling workflow, GARP, neural >>>networks, GRASP and many other algorithms could be used for that one >>>step in the workflow. Those algorithms would all be organized under >>>some high-level hierarchy ("StatisticalModels"). >>>Another example is the Pre-sample step, where we are using the GARP >>>pre-sample algorithm, but other sampling algorithms could be >>>substituted.
There should be a high-level "Sampling" concept, under >>>which different sampling algorithms would be organized. During the >>>design phase, the user could construct a workflow based on these high >>>level concepts (Sampling and StatisticalModel), then bind an actor >>>(already implemented or using Chad's new actor) in a particular view of >>>that workflow. So, a workflow would be designed at a high conceptual >>>level, and have multiple views, binding different algorithms, and those >>>different views would be logically linked through the high level >>>workflow. The immediate case is the GARP workflow we are designing will >>>need another version for the neural network algorithm, and that version >>>will be virtually an exact replicate except for that actor. Seems like >>>it would be better to have one workflow with different views... >>> >>>I hope the above is coherent...in reading it, I'm not sure that it is >>>:-) >>> >>>Deana >>> > >_______________________________________________ >seek-kr-sms mailing list >seek-kr-sms at ecoinformatics.org >http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040611/564675fa/attachment.htm From eal at eecs.berkeley.edu Sat Jun 12 07:59:40 2004 From: eal at eecs.berkeley.edu (Edward A Lee) Date: Sat, 12 Jun 2004 07:59:40 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <40C9BBE1.7010304@ku.edu> References: <40C629D8.1080107@lternet.edu> <40C629D8.1080107@lternet.edu> Message-ID: <5.1.0.14.2.20040612074804.00bc5f18@mho.eecs.berkeley.edu> At 09:04 AM 6/11/2004 -0500, Rod Spears wrote: >In a perfect world of time and budgets it would be nice to create a tool >that has standalone Modeling and Analysis Definition Language, then a core >standalone analysis/simulation engine, and lastly a set of GUI tools that >assist the scientists in creating the models and monitoring the execution. >Notice how the GUI came last? The GUI needs to be born out of the >underlying technology instead of defining it. If I can chime in... I believe that current work in Kepler _is_ that of creating a Modeling and Analysis Definition Language. The fact that it has a visual syntax does not make it any less a language. In fact, I would argue that when one is thinking about concurrent and distributed models, focusing on a visual syntax can be liberating, because it helps us break out of the procedural mode that prevails in textual languages. That said, I think we haven't done enough free thinking about what the semantics of this language can be... E.g., it may not always make sense to wrap web services in a process network actor, since for some web services, the metaphor of streaming data through the web service may not make sense... Web services are typically defined in terms of functions that are invoked remotely... In view of this, we are working on a mechanism that is intended to parallel the actor package and its domains, which are all oriented around streaming data through components. 
For lack of a better term, we are calling this the "component" package, because it bears a basic similarity with standard software component architectures like CORBA and DCOM, but our intent is to provide domains built on this "component" package that provide simple and understandable concurrency models. We have been inspired by nesC (a language used for programming sensor nodes in sensor networks) and Click (a language used for defining software-based network routers). Edward ------------ Edward A. Lee, Professor 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 phone: 510-642-0455, fax: 510-642-2739 eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal From rods at ku.edu Mon Jun 14 05:48:56 2004 From: rods at ku.edu (Rod Spears) Date: Mon, 14 Jun 2004 07:48:56 -0500 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CA16F4.9060204@sdsc.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> Message-ID: <40CD9EB8.9060906@ku.edu> I pretty much think Shawn has nailed it. This is exactly where we need to go. Rod Shawn Bowers wrote: > > > Rod Spears wrote: > >> (This is a general reply to the entire thread that is on seek-kr-sms): >> >> In the end, there are really two very simple questions about what we >> are all doing on SEEK: >> >> 1) Can we make it work? >> a) This begs the question of "how" to make it work. >> >> 2) Will anybody use it? >> a) This begs the question of "can" anybody use it? >> >> Shawn is right when he says we are coming at this from the >> "bottom-up." SEEK has been very focused on the mechanics of how to >> take legacy data and modeling techniques and create a new environment >> to "house" them and better utilize them. In the end, if you can't >> answer question #1, it doesn't matter whether you can answer question #2. >> >> But at the same time I have felt that we have been a little too >> focused on #1, or at the very least we haven't been spending enough >> time on question #2.
>> Both Nico and Ferdinando touched on two very important aspects of what >> we are talking about. Nico's comment about attacking the problem from >> "both" ends (top down and bottom up) seems very appropriate. In >> fact, the more we know about the back-end the better we know what >> "tools" or functionality we have to develop for the front-end and how >> best they can interact. >> >> Ferdinando's comment touches on the core of what concerns me the most, >> and it is the realization of question #2. >> His comment: "/I also think that the major impediment to an >> understanding that requires a paradigm switch is the early >> idealization of a graphical user interface/." Or more appropriately >> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >> >> We absolutely have to create a tool that scientists can use. So this >> means we have to create a tool that "engages" the way they think >> about modeling problems. Note that I used the word "engage", meaning >> the tool doesn't have to be an exact reflection of their process for >> creating models and doing analysis, but it has to be close enough >> to make them want to "step up to the plate" and "take a swing for the >> fence" as it were. >> >> In many ways too, Ferdinando's comment touches on the problem I have >> always had with Kepler. The UI is completely intertwined with the >> model definition and the analysis specification. It has nearly zero >> flexibility in how one "views" the "process" of entering in the >> model. (As a side note, the UI is one of the harder aspects of Kepler >> to tailor) >> >> In a perfect world of time and budgets it would be nice to create a >> tool that has a standalone Modeling and Analysis Definition Language, >> then a core standalone analysis/simulation engine, and lastly a set >> of GUI tools that assist the scientists in creating the models and >> monitoring the execution. Notice how the GUI came last?
The GUI needs >> to be born out of the underlying technology instead of defining it. >> >> I am a realist and I understand how much functionality Kepler brings >> to the table, it gives us such a head start in AMS. Maybe we need to >> start thinking about a more "conceptual" tool that fits in front of >> Kepler, but before that we need to really understand how the average >> scientist would approach the SEEK technology. I'll say this as a >> joke: "but that pretty much excludes any scientist working on SEEK," >> but it is true. Never let the folks creating the technology tell you >> how the technology should be used; that's the responsibility of the >> user. >> >> I know the word "use case" has been thrown around daily as if it were >> confetti, but I think the time is approaching where we need to really >> focus on developing some "real" end-user use cases. I think a much >> bigger effort and emphasis needs to be placed on the "top-down." And >> some of the ideas presented in this entire thread are a good start. > > > Great synthesis and points Rod. > > (Note that I un-cc'd kepler-dev, since this discussion is very much > seek-specific) > > I agree with you, Nico, and Ferdinando that we need top-down > development (i.e., an understanding of the targeted user problems and > needs, and how best to address these via end-user interfaces) as well > as bottom-up development (underlying technology, etc.). > > I think that in general, we are at a point in the project where we > have a good idea of the kinds of solutions we can provide (e.g., with > EcoGrid, Kepler, SMS, Taxon, and so on).
> > And, we are beginning to get to the point where we are > building/needing user interfaces: we are beginning to design/implement > add-ons to Kepler, e.g., for EcoGrid querying and Ontology-enabled > actor/dataset browsing; GrOWL is becoming our user-interface for > ontologies; we are designing a user interface for annotating actors > and datasets (for datasets, there are also UIs such as Morpho); and > working on taxonomic browsing. > > I definitely think that now is a great time in the project to take a > step back, and as these interfaces are being designed and implemented > (as well as the lower-level technology), to be informed by real > user-needs. > > > Here is what I think needs to be done to do an effective top-down design: > > 1. Clearly identify our target user group(s) and the general benefit > we believe SEEK will provide to these groups. In particular, who are > we developing the "SEEK system" for, and what are their problems/needs > and constraints. Capture this as a report. (As an aside, it will be > very hard to evaluate the utility of SEEK without understanding who it > is meant to help, and how it is meant to help them.) > > 2. Assemble a representative group of target users. As Rod suggests, > there should be participants that are independent of SEEK. [I attended > one meeting that was close to this in Abq in Aug. 2003 -- have there > been others?] > > 3. Identify the needs of the representative group in terms of SEEK. > These might be best represented as "user stories" (i.e., scenarios) > initially as opposed to use cases. I think there are two types of > user stories that are extremely beneficial: (1) as a scenario of how > some process works now, e.g., the story of a scientist that needed to > run a niche model; (2) ask the user to tell us "how you would like the > system to work" for the stories from 1. > > 4. Synthesize the user stories into a set of target use cases that > touch a wide range of functionality.
Develop and refine the use cases. > > 5. From the use cases and user constraints, design one or more > "storyboard" user interfaces, or the needed user interface components > from the use cases. At this point, there may be different possible > interfaces, e.g., a high-level ontology based interface as suggested > by Ferdinando and a low-level Kepler-based interface. This is where > we need to be creative to address user needs. > > 6. Get feedback from the target users on the "storyboard" interfaces > (i.e., let them evaluate the interfaces). Revisit the user stories via > the storyboards. Refine the second part of 3, and iterate 5 and 6. > > 7. Develop one or more "prototypes" (i.e., the interface with canned > functionality). Let the user group play with it, get feedback, and > iterate. > > 8. The result should be "the" user interface. > > > One of the most important parts of this process is to identify the > desired characteristics of the target users, and to pick a > representative group of users that can lead to the widest array of > use-cases/user-stories that are most beneficial to the target users. > > For example, we have primarily focused on niche-modeling as the use > case. (This isn't a great example, but bear with me.) If our sample > user group only consisted of scientists that did niche modeling, or if > this were our target user group, we would probably build a user > interface around, and specific to niche modeling (i.e., niche modeling > should become an integral, and probably embedded, part of the > interface). Of course, for us, this isn't necessarily true because we > know we have a more general target user group. But, hopefully you get > the point. > > > shawn > > >> >> Rod >> >> >> Deana Pennington wrote: >> >>> In thinking about the Kepler UI, it has occurred to me that it would >>> really be nice if the ontologies that we construct to organize the >>> actors into categories, could also be used in a high-level workflow >>> design phase.
For example, in the niche modeling workflow, GARP, >>> neural networks, GRASP and many other algorithms could be used for >>> that one step in the workflow. Those algorithms would all be >>> organized under some high-level hierarchy ("StatisticalModels"). >>> Another example is the Pre-sample step, where we are using the GARP >>> pre-sample algorithm, but other sampling algorithms could be >>> substituted. There should be a high-level "Sampling" concept, under >>> which different sampling algorithms would be organized. During the >>> design phase, the user could construct a workflow based on these >>> high level concepts (Sampling and StatisticalModel), then bind an >>> actor (already implemented or using Chad's new actor) in a >>> particular view of that workflow. So, a workflow would be designed >>> at a high conceptual level, and have multiple views, binding >>> different algorithms, and those different views would be logically >>> linked through the high level workflow. The immediate case is the >>> GARP workflow we are designing will need another version for the >>> neural network algorithm, and that version will be virtually an >>> exact replicate except for that actor. Seems like it would be >>> better to have one workflow with different views... >>> >>> I hope the above is coherent...in reading it, I'm not sure that it >>> is :-) >>> >>> Deana >>> >>> >> > From rods at ku.edu Mon Jun 14 05:56:34 2004 From: rods at ku.edu (Rod Spears) Date: Mon, 14 Jun 2004 07:56:34 -0500 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CA1A33.7090600@lternet.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> Message-ID: <40CDA082.5050602@ku.edu> In many ways I think the current user-interface work for Kepler is almost orthogonal to this discussion.
There are many issues with the current UI that need to be fixed ASAP, but I don't think it should keep us from getting a group together to start down the path that Shawn has outlined. If we (and we should) take a more process-oriented approach to developing the UI, this work really has little, if anything, to do with Kepler for quite some time. As I see it, the Kepler UI is really the "advanced" UI for SEEK. There is a whole lot of work that needs to go on before that. Deana has a very valid point as to how to begin this work with/without the usability position being filled. At the same time, many different aspects of the UI are beginning to take shape and time is of the essence. Rod Deana Pennington wrote: > Shawn & Rod, > > I think these are all great suggestions, and we've been discussing > putting together a group of ecologists for a couple of days of > testing, but: > > 1) we thought that there are some major issues with the interface as > it stands right now that need to be fixed before we try to get a group > together, and > 2) a decision needs to be made about the usability engineer position, so > that person can be involved right from the start in user testing and > UI design > > So, I think we should table this discussion until the above 2 things > are resolved. It's obvious that this needs to be addressed soon. > > Deana > > > Shawn Bowers wrote: > >> >> >> Rod Spears wrote: >> >>> (This is a general reply to the entire thread that is on seek-kr-sms): >>> >>> In the end, there are really two very simple questions about what we >>> are all doing on SEEK: >>> >>> 1) Can we make it work? >>> a) This begs the question of "how" to make it work. >>> >>> 2) Will anybody use it? >>> a) This begs the question of "can" anybody use it? >>> >>> Shawn is right when he says we are coming at this from the >>> "bottom-up."
SEEK has been very focused on the mechanics of how to >>> take legacy data and modeling techniques and create a new >>> environment to "house" them and better utilize them. In the end, if >>> you can't answer question #1, it doesn't matter whether you can answer >>> question #2. >>> >>> But at the same time I have felt that we have been a little too >>> focused on #1, or at the very least we haven't been spending enough >>> time on question #2. >>> >>> Both Nico and Ferdinando touched on two very important aspects of what >>> we are talking about. Nico's comment about attacking the problem >>> from "both" ends (top down and bottom up) seems very appropriate. >>> In fact, the more we know about the back-end the better we know what >>> "tools" or functionality we have to develop for the front-end and >>> how best they can interact. >>> >>> Ferdinando's comment touches on the core of what concerns me the most, >>> and it is the realization of question #2. >>> His comment: "/I also think that the major impediment to an >>> understanding that requires a paradigm switch is the early >>> idealization of a graphical user interface/." Or more appropriately >>> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >>> >>> We absolutely have to create a tool that scientists can use. So this >>> means we have to create a tool that "engages" the way they think >>> about modeling problems. Note that I used the word "engage", meaning >>> the tool doesn't have to be an exact reflection of their process for >>> creating models and doing analysis, but it has to be close enough >>> to make them want to "step up to the plate" and "take a swing for >>> the fence" as it were. >>> >>> In many ways too, Ferdinando's comment touches on the problem I have >>> always had with Kepler. The UI is completely intertwined with the >>> model definition and the analysis specification. It has nearly zero >>> flexibility in how one "views" the "process" of entering in the >>> model.
(As a side note, the UI is one of the harder aspects of >>> Kepler to tailor) >>> >>> In a perfect world of time and budgets it would be nice to create a >>> tool that has a standalone Modeling and Analysis Definition Language, >>> then a core standalone analysis/simulation engine, and lastly a set >>> of GUI tools that assist the scientists in creating the models and >>> monitoring the execution. Notice how the GUI came last? The GUI >>> needs to be born out of the underlying technology instead of >>> defining it. >>> >>> I am a realist and I understand how much functionality Kepler brings >>> to the table, it gives us such a head start in AMS. Maybe we need to >>> start thinking about a more "conceptual" tool that fits in front of >>> Kepler, but before that we need to really understand how the average >>> scientist would approach the SEEK technology. I'll say this as a >>> joke: "but that pretty much excludes any scientist working on SEEK," >>> but it is true. Never let the folks creating the technology tell you >>> how the technology should be used; that's the responsibility of the >>> user. >>> >>> I know the word "use case" has been thrown around daily as if it >>> were confetti, but I think the time is approaching where we need to >>> really focus on developing some "real" end-user use cases. I think a >>> much bigger effort and emphasis needs to be placed on the >>> "top-down." And some of the ideas presented in this entire thread are >>> a good start. >> >> >> >> Great synthesis and points Rod. >> >> (Note that I un-cc'd kepler-dev, since this discussion is very much >> seek-specific) >> >> I agree with you, Nico, and Ferdinando that we need top-down >> development (i.e., an understanding of the targeted user problems and >> needs, and how best to address these via end-user interfaces) as well >> as bottom-up development (underlying technology, etc.).
>> >> I think that in general, we are at a point in the project where we >> have a good idea of the kinds of solutions we can provide (e.g., with >> EcoGrid, Kepler, SMS, Taxon, and so on). >> >> And, we are beginning to get to the point where we are >> building/needing user interfaces: we are beginning to >> design/implement add-ons to Kepler, e.g., for EcoGrid querying and >> Ontology-enabled actor/dataset browsing; GrOWL is becoming our >> user-interface for ontologies; we are designing a user interface for >> annotating actors and datasets (for datasets, there are also UIs such >> as Morpho); and working on taxonomic browsing. >> >> I definitely think that now is a great time in the project to take a >> step back, and as these interfaces are being designed and implemented >> (as well as the lower-level technology), to be informed by real >> user-needs. >> >> >> Here is what I think needs to be done to do an effective top-down >> design: >> >> 1. Clearly identify our target user group(s) and the general benefit >> we believe SEEK will provide to these groups. In particular, who are >> we developing the "SEEK system" for, and what are their >> problems/needs and constraints. Capture this as a report. (As an >> aside, it will be very hard to evaluate the utility of SEEK without >> understanding who it is meant to help, and how it is meant to help >> them.) >> >> 2. Assemble a representative group of target users. As Rod suggests, >> there should be participants that are independent of SEEK. [I >> attended one meeting that was close to this in Abq in Aug. 2003 -- >> have there been others?] >> >> 3. Identify the needs of the representative group in terms of SEEK. >> These might be best represented as "user stories" (i.e., scenarios) >> initially as opposed to use cases.
I think there are two types of >> user stories that are extremely beneficial: (1) as a scenario of how >> some process works now, e.g., the story of a scientist that needed to >> run a niche model; (2) ask the user to tell us "how you would like >> the system to work" for the stories from 1. >> >> 4. Synthesize the user stories into a set of target use cases that >> touch a wide range of functionality. Develop and refine the use cases. >> >> 5. From the use cases and user constraints, design one or more >> "storyboard" user interfaces, or the needed user interface components >> from the use cases. At this point, there may be different possible >> interfaces, e.g., a high-level ontology based interface as suggested >> by Ferdinando and a low-level Kepler-based interface. This is where >> we need to be creative to address user needs. >> >> 6. Get feedback from the target users on the "storyboard" interfaces >> (i.e., let them evaluate the interfaces). Revisit the user stories >> via the storyboards. Refine the second part of 3, and iterate 5 and 6. >> >> 7. Develop one or more "prototypes" (i.e., the interface with canned >> functionality). Let the user group play with it, get feedback, and >> iterate. >> >> 8. The result should be "the" user interface. >> >> >> One of the most important parts of this process is to identify the >> desired characteristics of the target users, and to pick a >> representative group of users that can lead to the widest array of >> use-cases/user-stories that are most beneficial to the target users. >> >> For example, we have primarily focused on niche-modeling as the use >> case. (This isn't a great example, but bear with me.) If our sample >> user group only consisted of scientists that did niche modeling, or >> if this were our target user group, we would probably build a user >> interface around, and specific to niche modeling (i.e., niche >> modeling should become an integral, and probably embedded, part of >> the interface).
Of course, for us, this isn't necessarily true >> because we know we have a more general target user group. But, >> hopefully you get the point. >> >> >> shawn >> >> >>> >>> Rod >>> >>> >>> Deana Pennington wrote: >>> >>>> In thinking about the Kepler UI, it has occurred to me that it >>>> would really be nice if the ontologies that we construct to >>>> organize the actors into categories, could also be used in a >>>> high-level workflow design phase. For example, in the niche >>>> modeling workflow, GARP, neural networks, GRASP and many other >>>> algorithms could be used for that one step in the workflow. Those >>>> algorithms would all be organized under some high-level hierarchy >>>> ("StatisticalModels"). Another example is the Pre-sample step, >>>> where we are using the GARP pre-sample algorithm, but other >>>> sampling algorithms could be substituted. There should be a >>>> high-level "Sampling" concept, under which different sampling >>>> algorithms would be organized. During the design phase, the user >>>> could construct a workflow based on these high level concepts >>>> (Sampling and StatisticalModel), then bind an actor (already >>>> implemented or using Chad's new actor) in a particular view of that >>>> workflow. So, a workflow would be designed at a high conceptual >>>> level, and have multiple views, binding different algorithms, and >>>> those different views would be logically linked through the high >>>> level workflow. The immediate case is the GARP workflow we are >>>> designing will need another version for the neural network >>>> algorithm, and that version will be virtually an exact replicate >>>> except for that actor. Seems like it would be better to have one >>>> workflow with different views... 
>>>> >>>> I hope the above is coherent...in reading it, I'm not sure that it >>>> is :-) >>>> >>>> Deana >>>> >>>> >>> >> >> _______________________________________________ >> seek-dev mailing list >> seek-dev at ecoinformatics.org >> http://www.ecoinformatics.org/mailman/listinfo/seek-dev > > > From ferdinando.villa at uvm.edu Mon Jun 14 08:25:19 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Mon, 14 Jun 2004 11:25:19 -0400 Subject: [seek-kr-sms] Re: [kepler-dev] introduction & distributed ptolemy In-Reply-To: <16589.48492.642188.623683@multivac.sdsc.edu> References: <5.1.0.14.2.20040612072426.02628540@mho.eecs.berkeley.edu> <16589.48492.642188.623683@multivac.sdsc.edu> Message-ID: <1087226719.4452.17.camel@basil.snr.uvm.edu> On Mon, 2004-06-14 at 11:01, Bertram Ludaescher wrote: > Ideally the ports of an actor should (or at least could) > have multiple types: > - the data type (including say XML Schema type), > - the semantic type (e.g. a concept expression describing more formally > what else might be known about the data flowing through the port) > [[aside for Ferdinando: our "reductionist/separatist approach" does not > preclude forever an integrated modeling solution - it's just bottom up > to get sth useful soon/in finite time ;-]] Hi Bertram! one point - if you read "reductionistic" (which I probably wrote somewhere) as "reductive" you're misinterpreting me - "we" ecologists mostly see reduct. vs. holistic as complementary (with hierarchical thinking as a possible integrating framework) so when we say reductionistic, we mean exactly what you also mean... one GOOD way to look at the problem, usually the most practical, easier to study, while often the least conducive to synthetic understanding... but NOT separatist!!!! 
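Bertram's idea above, that a port could carry both a structural data type and a semantic type, might be sketched roughly as follows. This is only an illustration: SemanticPort, the concept names, and the is-a table are invented for the example (they are not the actual Ptolemy/Kepler API), and a real system would ask a reasoner for concept subsumption instead of consulting a lookup table.

```java
// Sketch: a port carrying both a structural data type and a semantic
// type (an ontology concept). All names here are illustrative, not the
// actual Ptolemy/Kepler API.
import java.util.Set;

public class SemanticPort {
    private final String dataType;      // e.g. an XML Schema type name
    private final String semanticType;  // e.g. an ontology concept name

    public SemanticPort(String dataType, String semanticType) {
        this.dataType = dataType;
        this.semanticType = semanticType;
    }

    /**
     * A connection is well-typed only if the structural types match AND
     * the source's concept is subsumed by the target's concept. Subsumption
     * is stubbed out with an explicit is-a table; a real system would ask
     * a description-logic reasoner.
     */
    public static boolean compatible(SemanticPort source, SemanticPort target,
                                     Set<String> isA) {
        boolean structural = source.dataType.equals(target.dataType);
        boolean semantic = source.semanticType.equals(target.semanticType)
                || isA.contains(source.semanticType + " isa " + target.semanticType);
        return structural && semantic;
    }

    public static void main(String[] args) {
        Set<String> isA = Set.of("eco:RainfallSeries isa eco:TimeSeries");
        SemanticPort out = new SemanticPort("xs:double", "eco:RainfallSeries");
        SemanticPort in  = new SemanticPort("xs:double", "eco:TimeSeries");
        // Structurally identical, and RainfallSeries is-a TimeSeries:
        System.out.println(compatible(out, in, isA)); // prints "true"
    }
}
```

A director could run such a check when a connection is made, rejecting edges whose concepts do not line up even though the raw data types match.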
Philosophy aside, here's a more SEEK-specific provocation: don't you think that the "conceptual/holistic/top-down" approach is the one that needs the semantic types, while the "workflow/reductionist/ptolemy" approach would be just fine with just the "storage/machine" types of whatever virtual machine AMS will be? Also: where do the "transformations" belong (scaling, units)? I'd argue they belong "mechanically" to the reductionistic side - just like all other actors, created to calculate a concept - and if the user doesn't need to see them, it's not because we hide them up in the conceptual description, but because they're actors in the workflow, and the conceptual description that users work with is not the workflow. Maybe we're mixing the sides up somewhat, and if so, is this ok... or is it going to postpone the beautiful "moment of clarity" when we all realize that we've all been thinking the same thing all along? Cheers, ferdinand > - the event consumption/production type (useful for scheduling a la > SDF) > - the communication type (through the Ptolemy/Kepler client, directly > via say FTP or HTTP) etc > > At some levels of modeling one does explicitly hide such detail from > the modeler/user, but at other levels this might be a good way of > overcoming some scalability issues (if you have terabyte data streams > you want them to go directly where they need to) > > A related problem of web services (as actors) is that they send results > back to the caller (Kepler) and don't forward them to the subsequent > actor, making large data transfers virtually impossible. > > A simple extension to the web service model (does anyone know whether > that's already done?) would allow for data to include *references*, > so that a process would be able to return to Kepler just a reference to > the result data, and that reference would be passed on to the consuming > actor, who then understands how to dereference it.
This simple > extension seems to be an easy solution to what we earlier called the > 3rd-party transfer problem: > > -->[Actor A] ---> [ Actor B] --> ... > > To stream large data set D from A to B w/o going through > Ptolemy/Kepler one can simply send instead a handle &D and then B, > upon receiving &D, understands and dereferences it by calling the > appropriate protocol (FTP/gridFTP, HTTP, SRB,...) > > Note that there are already explicit Kepler actors (SRBread/SRBwrite, > gridFTP) for large data transfer. It would be more elegant to just > send handles in the form, e.g., dereference(http://.....) > Note that the special tag 'dereference' is needed since not every URL > should be dereferenced (a URL can be perfectly valid data all by > itself) > > It would be good if we could (a) define our extensions in line with > web services extensions that deal with dereferencing message parts (if > such exist) and (b) work on a joint > Kepler/Ptolemy/Roadnet/SEEK/SDM etc approach (in fact, Kepler is such > a joint forum for co-designing this together..) > > Bertram > > PS Tobin: I recently met Kent and heard good news about ORB access in > Kepler already. You can also check with Efrat at SDSC on 3rd-party > transfer issues while you're at SDSC.. > > >>>>> "EAL" == Edward A Lee writes: > EAL> > EAL> At 05:48 PM 6/11/2004 -0700, Tobin Fricke wrote: > >> A basic question I have is, is there a defined network transport for > >> Ptolemy relations? I expect that this question isn't really well-formed > >> as I still have some reading to do on how relations actually work. > >> Nonetheless, there is the question of, if we have different instances of > >> Ptolemy talking to each other across the network, how are the data streams > >> transmitted?
In our case one option is to use the ORB as the stream > >> transport, equipping each sub-model with ORB source and ORB sink > >> components; and perhaps this could be done implicitly to automatically > >> distribute a model across the network. But this line of thinking is > >> strongly tied to the idea of data streams and may not be appropriate for > >> the more general notion of relations in Ptolemy. > EAL> > EAL> We have done quite a bit of experimentation with distributed > EAL> Ptolemy II models, but haven't completely settled on any one > EAL> approach... Most of the recent work in this area has been > EAL> done by Yang Zhao, whom I've cc'd for additional comments... > EAL> Here are some notes: > EAL> > EAL> - A model can contain a component that is defined elsewhere > EAL> on the network, referenced at a URL. There is a demo > EAL> in the quick tour that runs a submodel that sits on our > EAL> web server. > EAL> > EAL> - The Corba library provides a mechanism for transporting > EAL> tokens from one model to another using either push or > EAL> pull style interactions. The software is in the > EAL> ptolemy.actor.corba package, but there are currently > EAL> no good (easily run) demos, and documentation is sparse. > EAL> > EAL> - The MobileModel actor accepts a model definition on an > EAL> input port and then executes that model. Yang has used > EAL> this with the Corba actors to build models where one > EAL> model constructs another model and sends it to another > EAL> machine on the network to execute. > EAL> > EAL> - The JXTA library (ptolemy.actor.lib.jxta) uses Sun's > EAL> XML-based P2P mechanism. Yang has used this to construct > EAL> a distributed chat room application. > EAL> > EAL> - The ptolemy.actor.lib.net has two actors DatagramReader > EAL> and DatagramWriter that provide low-level mechanisms for > EAL> models to communicate over the net. 
Three or four years > EAL> ago Win Williams used this to create a distributed model > EAL> where two computers on the net were connected to > EAL> motor controllers and users could "arm wrestle" over > EAL> the network ... when one of the users turned his motor, > EAL> the other motor would turn, and they could fight each > EAL> other, trying to turn the motors in opposite directions. > EAL> > EAL> - Some years ago we also did some experimentation with > EAL> Sun's JINI P2P mechanism, but this has been largely > EAL> supplanted by JXTA. > EAL> > EAL> - The security library (ptolemy.actor.lib.security) > EAL> provides encryption and decryption and authentication > EAL> based on digital signatures. > EAL> > EAL> Most of these mechanisms have not been well packaged, > EAL> and we haven't worked out the "lifecycle management" issues > EAL> (how to start up a distributed model systematically, how > EAL> to manage network failures). > EAL> > EAL> In my view, working out these issues is a top priority... > EAL> I would be delighted to work with you or anyone else on this... > EAL> > EAL> Edward > EAL> > EAL> > EAL> > EAL> > EAL> > EAL> ------------ > EAL> Edward A.
Lee, Professor > EAL> 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 > EAL> phone: 510-642-0455, fax: 510-642-2739 > EAL> eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal > EAL> > EAL> _______________________________________________ > EAL> kepler-dev mailing list > EAL> kepler-dev at ecoinformatics.org > EAL> http://www.ecoinformatics.org/mailman/listinfo/kepler-dev > _______________________________________________ > kepler-dev mailing list > kepler-dev at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/kepler-dev -- From berkley at nceas.ucsb.edu Mon Jun 14 08:56:55 2004 From: berkley at nceas.ucsb.edu (Chad Berkley) Date: Mon, 14 Jun 2004 08:56:55 -0700 Subject: [seek-kr-sms] list addressing Message-ID: <40CDCAC7.8090308@nceas.ucsb.edu> Hi All, My apologies if you get this more than once. I would like to make a request of those using the ecoinformatics listserv. Please do not BCC any recipients. This forces your list messages to moderation, which Matt or I must approve before being shipped out to the list. If you are consistently getting a message back from the listserv telling you that your message has been moderated, please figure out why (usually you are either not subscribed to that list with the email address you used to send the message, or you have BCC'd someone) and fix it. I think I've had to moderate almost every single message on the current UI/ontology thread because a BCC was included. Making sure your messages aren't getting moderated will help Matt and me out a lot.
Thanks, chad From ferdinando.villa at uvm.edu Mon Jun 14 08:51:23 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Mon, 14 Jun 2004 11:51:23 -0400 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CDA082.5050602@ku.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> <40CDA082.5050602@ku.edu> Message-ID: <1087228283.4452.32.camel@basil.snr.uvm.edu> One way I would frame this discussion, thinking about the comment about "visual modeling and analysis language" and the whole UI issue, is that we need to start a synthesis (top-down) effort aimed at understanding what's the language that shapes an ecologist's thinking when they approach a problem, and characterize its relationship with the two conceptual frameworks we've been concentrating on so far: the KR framework and the workflow framework (in their abstract nature, before going down to OWL and Ptolemy, and WITHOUT one thought to any pretty screenshot!). The exercise should highlight whether we need to (a) have enough of one - maybe slightly extended - and infer the other, (b) find something that sits in the middle, or (c) find something totally different. This done, we should be able to easily define the visual language that most closely embodies it. Back to personal opinions, I'll just add that it's my belief that this process, although it needs very open minds, doesn't necessarily have to be very long and very hard, and I think we have all the pieces in place to quickly prototype the right UI (as opposed to the "advanced" one!) when the idea is clear, without having to distance ourselves much from things as they stand now... ferd On Mon, 2004-06-14 at 08:56, Rod Spears wrote: > In many ways I think the current user-interface work for Kepler is > almost orthogonal to this discussion.
> > There are many issues with the current UI that need to be fixed ASAP, > but I don't think it should keep us from getting a group together to > start down the path that Shawn has outlined. > > If we (and we should) take a more process oriented approach to > developing the UI this work really has little, if anything, to do with > Kepler for quite some time. > > As I see it the Kepler UI is really the "advanced" UI for SEEK. There is > a whole lot of work that needs to go on before that. > > Deana has a very valid point as to how to begin this work with/without > the usability position being filled. At the same time, many different > aspects of the UI are beginning to take shape and time is of the essence. > > Rod > > > Deana Pennington wrote: > > > Shawn & Rod, > > > > I think these are all great suggestions, and we've been discussing > > putting together a group of ecologists for a couple of days of > > testing, but: > > > > 1) we thought that there are some major issues with the interface as > > it stands right now that need to be fixed before we try to get a group > > together, and > > 2) a decision needs to be made about the usability engineer position, so > > that person can be involved right from the start in user testing and > > UI design > > > > So, I think we should table this discussion until the above 2 things > > are resolved. It's obvious that this needs to be addressed soon. > > > > Deana > > > > > > Shawn Bowers wrote: > > > >> > >> > >> Rod Spears wrote: > >> > >>> (This is a general reply to the entire thread that is on seek-kr-sms): > >>> > >>> In the end, there are really two very simple questions about what we > >>> are all doing on SEEK: > >>> > >>> 1) Can we make it work? > >>> a) This begs the question of "how" to make it work. > >>> > >>> 2) Will anybody use it? > >>> a) This begs the question of "can" anybody use it? > >>> > >>> Shawn is right when he says we are coming at this from the > >>> "bottom-up."
SEEK has been very focused on the mechanics of how to > >>> take legacy data and modeling techniques and create a new > >>> environment to "house" them and better utilize them. In the end, if > >>> you can't answer question #1, it doesn't matter whether you can answer > >>> question #2. > >>> > >>> But at the same time I have felt that we have been a little too > >>> focused on #1, or at the very least we haven't been spending enough > >>> time on question #2. > >>> > >>> Both Nico and Fernando touched on two very important aspects of what > >>> we are talking about. Nico's comment about attacking the problem > >>> from "both" ends (top down and bottom up) seems very appropriate. > >>> In fact, the more we know about the back-end the better we know what > >>> "tools" or functionality we have to develop for the front-end and > >>> how best they can interact. > >>> > >>> Fernando's comment touches on the core of what concerns me the most, > >>> and it is the realization of question #2. > >>> His comment: "/I also think that the major impediment to an > >>> understanding that requires a paradigm switch is the early > >>> idealization of a graphical user interface/." Or more appropriately > >>> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). > >>> > >>> We absolutely have to create a tool that scientists can use. So this > >>> means we have to create a tool that "engages" the way they think > >>> about modeling problems. Note that I used the word "engage", meaning > >>> the tool doesn't have to be an exact reflection of their process for > >>> creating models and doing analysis, but it has to be close enough > >>> to make them want to "step up to the plate" and "take a swing for > >>> the fence" as it were. > >>> > >>> In many ways too, Fernando's comment touches on the problem I have > >>> always had with Kepler. The UI is completely intertwined with the > >>> model definition and the analysis specification.
It has nearly zero > >>> flexibility in how one "views" the "process" of entering in the > >>> model. (As a side note, the UI is one of the harder aspects of > >>> Kepler to tailor) > >>> > >>> In a perfect world of time and budgets it would be nice to create a > >>> tool that has a standalone Modeling and Analysis Definition Language, > >>> then a core standalone analysis/simulation engine, and lastly a set > >>> of GUI tools that assist the scientists in creating the models and > >>> monitoring the execution. Notice how the GUI came last? The GUI > >>> needs to be born out of the underlying technology instead of > >>> defining it. > >>> > >>> I am a realist and I understand how much functionality Kepler brings > >>> to the table; it gives us such a head start in AMS. Maybe we need to > >>> start thinking about a more "conceptual" tool that fits in front of > >>> Kepler, but before that we need to really understand how the average > >>> scientist would approach the SEEK technology. I'll say this as a > >>> joke: "but that pretty much excludes any scientist working on SEEK," > >>> but it is true. Never let the folks creating the technology tell you > >>> how the technology should be used; that's the responsibility of the > >>> user. > >>> > >>> I know the word "use case" has been thrown around daily as if it > >>> were confetti, but I think the time is approaching where we need to > >>> really focus on developing some "real" end-user use cases. I think a > >>> much bigger effort and emphasis needs to be placed on the > >>> "top-down." And some of the ideas presented in this entire thread are > >>> a good start. > >> > >> > >> > >> Great synthesis and points Rod.
> >> > >> (Note that I un-cc'd kepler-dev, since this discussion is very much > >> seek-specific) > >> > >> I agree with you, Nico, and Ferdinando that we need top-down > >> development (i.e., an understanding of the targeted user problems and > >> needs, and how best to address these via end-user interfaces) as well > >> as bottom-up development (underlying technology, etc.). > >> > >> I think that in general, we are at a point in the project where we > >> have a good idea of the kinds of solutions we can provide (e.g., with > >> EcoGrid, Kepler, SMS, Taxon, and so on). > >> > >> And, we are beginning to get to the point where we are > >> building/needing user interfaces: we are beginning to > >> design/implement add-ons to Kepler, e.g., for EcoGrid querying and > >> Ontology-enabled actor/dataset browsing; GrOWL is becoming our > >> user-interface for ontologies; we are designing a user interface for > >> annotating actors and datasets (for datasets, there are also UIs such > >> as Morpho); and working on taxonomic browsing. > >> > >> I definitely think that now in the project is a great time to take a > >> step back, and as these interfaces are being designed and implemented > >> (as well as the lower-level technology), to be informed by real > >> user-needs. > >> > >> > >> Here is what I think needs to be done to do an effective top-down > >> design: > >> > >> 1. Clearly identify our target user group(s) and the general benefit > >> we believe SEEK will provide to these groups. In particular, who are > >> we developing the "SEEK system" for, and what are their > >> problems/needs and constraints. Capture this as a report. (As an > >> aside, it will be very hard to evaluate the utility of SEEK without > >> understanding who it is meant to help, and how it is meant to help > >> them.) > >> > >> 2. Assemble a representative group of target users. As Rod suggests, > >> there should be participants that are independent of SEEK.
[I > >> attended one meeting that was close to this in Abq in Aug. 2003 -- > >> have there been others?] > >> > >> 3. Identify the needs of the representative group in terms of SEEK. > >> These might be best represented as "user stories" (i.e., scenarios) > >> initially as opposed to use cases. I think there are two types of > >> user stories that are extremely beneficial: (1) as a scenario of how > >> some process works now, e.g., the story of a scientist that needed to > >> run a niche model; (2) ask the user to tell us "how you would like > >> the system to work" for the stories from 1. > >> > >> 4. Synthesize the user stories into a set of target use cases that > >> touch a wide range of functionality. Develop and refine the use cases. > >> > >> 5. From the use cases and user constraints, design one or more > >> "storyboard" user interfaces, or the needed user interface components > >> from the use cases. At this point, there may be different possible > >> interfaces, e.g., a high-level ontology based interface as suggested > >> by Ferdinando and a low-level Kepler-based interface. This is where > >> we need to be creative to address user needs. > >> > >> 6. Get feedback from the target users on the "storyboard" interfaces > >> (i.e., let them evaluate the interfaces). Revisit the user stories > >> via the storyboards. Refine the second part of 3, and iterate 5 and 6. > >> > >> 7. Develop one or more "prototypes" (i.e., the interface with canned > >> functionality). Let the user group play with it, get feedback, and > >> iterate. > >> > >> 8. The result should be "the" user interface. > >> > >> > >> One of the most important parts of this process is to identify the > >> desired characteristics of the target users, and to pick a > >> representative group of users that can lead to the widest array of > >> use-cases/user-stories that are most beneficial to the target users. > >> > >> For example, we have primarily focused on niche-modeling as the use > >> case.
(This isn't a great example, but bear with me) If our sample > >> user group only consisted of scientists that did niche modeling, or > >> if this were our target user group, we would probably build a user > >> interface around, and specific to niche modeling (i.e., niche > >> modeling should become an integral, and probably embedded, part of > >> the interface). Of course, for us, this isn't necessarily true > >> because we know we have a more general target user group. But, > >> hopefully you get the point. > >> > >> > >> shawn > >> > >> > >>> > >>> Rod > >>> > >>> > >>> Deana Pennington wrote: > >>> > >>>> In thinking about the Kepler UI, it has occurred to me that it > >>>> would really be nice if the ontologies that we construct to > >>>> organize the actors into categories, could also be used in a > >>>> high-level workflow design phase. For example, in the niche > >>>> modeling workflow, GARP, neural networks, GRASP and many other > >>>> algorithms could be used for that one step in the workflow. Those > >>>> algorithms would all be organized under some high-level hierarchy > >>>> ("StatisticalModels"). Another example is the Pre-sample step, > >>>> where we are using the GARP pre-sample algorithm, but other > >>>> sampling algorithms could be substituted. There should be a > >>>> high-level "Sampling" concept, under which different sampling > >>>> algorithms would be organized. During the design phase, the user > >>>> could construct a workflow based on these high level concepts > >>>> (Sampling and StatisticalModel), then bind an actor (already > >>>> implemented or using Chad's new actor) in a particular view of that > >>>> workflow. So, a workflow would be designed at a high conceptual > >>>> level, and have multiple views, binding different algorithms, and > >>>> those different views would be logically linked through the high > >>>> level workflow. 
The immediate case is that the GARP workflow we are > >>>> designing will need another version for the neural network > >>>> algorithm, and that version will be virtually an exact replicate > >>>> except for that actor. Seems like it would be better to have one > >>>> workflow with different views... > >>>> > >>>> I hope the above is coherent...in reading it, I'm not sure that it > >>>> is :-) > >>>> > >>>> Deana > >>>> > >>>> > >>> > >> > >> _______________________________________________ > >> seek-dev mailing list > >> seek-dev at ecoinformatics.org > >> http://www.ecoinformatics.org/mailman/listinfo/seek-dev > > > > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- From rods at ku.edu Mon Jun 14 11:02:59 2004 From: rods at ku.edu (Rod Spears) Date: Mon, 14 Jun 2004 13:02:59 -0500 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <1087228283.4452.32.camel@basil.snr.uvm.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> <40CDA082.5050602@ku.edu> <1087228283.4452.32.camel@basil.snr.uvm.edu> Message-ID: <40CDE853.7080909@ku.edu> I agree with Ferdinando and the entire problem can be boiled down to his quote "/we need to start a synthesis (top-down) effort aimed at understanding what's the language that shapes an ecologist's thinking when they approach a problem/" Of course, I think the word "language" is both literal and figurative. I disagree with the notion that Kepler is a "/visual modeling and analysis language/." Or if it is that, it is at too low a level and at the moment entirely too difficult for the non-SEEK scientists to use. The solution isn't "just fix Kepler's UI." Kepler has an important role to play in the project; it is a very powerful tool as we all know. The point is, let's not /force /it to play a role it isn't necessarily meant to play.
Rod Ferdinando Villa wrote: >One way I would frame this discussion, thinking about the comment about >"visual modeling and analysis language" and the whole UI issue, is that >we need to start a synthesis (top-down) effort aimed at understanding >what's the language that shapes an ecologist's thinking when they >approach a problem, and characterize its relationship with the two >conceptual frameworks we've been concentrating on so far: the KR >framework and the workflow framework (in their abstract nature, before >going down to OWL and Ptolemy, and WITHOUT one thought to any pretty >screenshot!). The exercise should highlight whether we need to (a) have >enough of one - maybe slightly extended - and infer the other, (b) find >something that sits in the middle, or (c) find something totally >different. This done, we should be able to easily define the visual >language that most closely embodies it. > >Back to personal opinions, I'll just add that it's my belief that this >process, although it needs very open minds, doesn't necessarily have to >be very long and very hard, and I think we have all the pieces in place >to quickly prototype the right UI (as opposed to the "advanced" one!) >when the idea is clear, without having to distance ourselves much from >things as they stand now... > >ferd > >On Mon, 2004-06-14 at 08:56, Rod Spears wrote: > > >>In many ways I think the current user-interface work for Kepler is >>almost orthoginal to this discussion. >> >>There are many issues with the current UI that need to be fixed ASAP, >>but I don't think it should keep us from getting a group together to >>start down the path that Shawn has outlined. >> >>If we (and we should) take a more process oriented approach to >>developing the UI this work really has little, if anything, to do with >>Kepler for quite sometime. >> >>As I see it the Kepler UI is really the "advanced" UI for SEEK. There is >>a whole lot of work that needs to go on before that. 
>> >>Deana has a very valid point as to how to begin this work with/without >>the usability position being filled. At the same time, many different >>aspects of the UI are being to take shape and time is of the essence. >> >>Rod >> >> >>Deana Pennington wrote: >> >> >> >>>Shawn & Rod, >>> >>>I think these are all great suggestions, and we've been discussimg >>>putting together a group of ecologists for a couple of days of >>>testing, but: >>> >>>1) we thought that there are some major issues with the interface as >>>it stands right now that need to be fixed before we try to get a group >>>together, and >>>2) a decision needs to made about the useability engineer position, so >>>that person can be involved right from the start in user testing and >>>UI design >>> >>>So, I think we should table this discussion until the above 2 things >>>are resolved. It's obvious that this needs to be addressed soon. >>> >>>Deana >>> >>> >>>Shawn Bowers wrote: >>> >>> >>> >>>>Rod Spears wrote: >>>> >>>> >>>> >>>>>(This is a general reply to the entire thread that is on seek-kr-sms): >>>>> >>>>>In the end, there are really two very simple questions about what we >>>>>are all doing on SEEK: >>>>> >>>>>1) Can we make it work? >>>>> a) This begs the question of "how" to make it work. >>>>> >>>>>2) Will anybody use it? >>>>> a) This begs the question of "can" anybody use it? >>>>> >>>>>Shawn is right when he says we are coming at this from the >>>>>"bottom-up." SEEK has been very focused on the mechanics of how to >>>>>take legacy data and modeling techniques and create a new >>>>>environment to "house" them and better utilize them. In the end, if >>>>>you can't answer question #1, it does matter whether you can answer >>>>>question #2. >>>>> >>>>>But at the same time I have felt that we have been a little too >>>>>focused on #1, or at the very least we haven't been spending enough >>>>>time on question #2. 
>>>>> >>>>>Both Nico and Fernando touched on two very important aspects of what >>>>>we are talking about. Nico's comment about attacking the problem >>>>>from "both" ends (top down and bottom up) seems very appropriate. >>>>>In fact, the more we know about the back-end the better we know what >>>>>"tools" or functionality we have to develop for the front-end and >>>>>how best they can interact. >>>>> >>>>>Fernando's comment touches on the core of what concerns me the most, >>>>>and it is the realization of question #2 >>>>>His comment: "/I also think that the major impediment to an >>>>>understanding that requires a paradigm switch is the early >>>>>idealization of a graphical user interface/." Or more appropriately >>>>>known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >>>>> >>>>>We absolutely have to create a tool that scientists can use. So this >>>>>means we have to create a tool that "engages" the way they think >>>>>about modeling problems. Note that I used the word "engage", meaning >>>>>the tool doesn't to be an exact reflection of their process for >>>>>creating a models and doing analysis, but if has to be close enough >>>>>to make them want to "step up to the plate" and "take a swing for >>>>>the fence" as it were. >>>>> >>>>>In many ways too, Fernando's comment touch on the the problem I have >>>>>always had with Kepler. The UI is completely intertwined with the >>>>>model definition and the analysis specification. It has nearly zero >>>>>flexibility in how one "views" the "process" of entering in the >>>>>model. (As a side note, the UI is one of the harder aspects of >>>>>Kepler to tailor) >>>>> >>>>>In a perfect world of time and budgets it would be nice to create a >>>>>tool that has standalone Modeling and Analysis Definition Language, >>>>>then a core standalone analysis/simulation engine, and lastly a set >>>>>of GUI tools that assist the scientists in creating the models and >>>>>monitoring the execution. 
Notice how the GUI came last? The GUI >>>>>needs to be born out of the underlying technology instead of >>>>>defining it. >>>>> >>>>>I am a realist and I understand how much functionality Kepler brings >>>>>to the table, it gives us such a head start in AMS. Maybe we need to >>>>>start thinking about a more "conceptual" tool that fits in front of >>>>>Kelper, but before that we need to really understand how the average >>>>>scientist would approach the SEEK technology. I'll say this as a >>>>>joke: "but that pretty much excludes any scientist working on SEEK," >>>>>but it is true. Never let the folks creating the technology tell you >>>>>how the technology should be used, that's the responsibility of the >>>>>user. >>>>> >>>>>I know the word "use case" has been thrown around daily as if it >>>>>were confetti, but I think the time is approaching where we need to >>>>>really focus on developing some "real" end-user use cases. I think a >>>>>much bigger effort and emphasis needs to be placed on the >>>>>"top-down." And some of the ideas presented in this entire thread is >>>>>a good start. >>>>> >>>>> >>>>Great synthesis and points Rod. >>>> >>>>(Note that I un-cc'd kepler-dev, since this discussion is very much >>>>seek-specific) >>>> >>>>I agree with you, Nico, and Ferdinando that we need top-down >>>>development (i.e., an understanding of the targeted user problems and >>>>needs, and how best to address these via end-user interfaces) as well >>>>as bottom-up development (underlying technology, etc.). >>>> >>>>I think that in general, we are at a point in the project where we >>>>have a good idea of the kinds of solutions we can provide (e.g., with >>>>EcoGrid, Kepler, SMS, Taxon, and so on). 
>>>> >>>>And, we are beginning to get to the point where we are >>>>building/needing user interfaces: we are beginning to >>>>design/implement add-ons to Kepler, e.g., for EcoGrid querying and >>>>Ontology-enabled actor/dataset browsing; GrOWL is becoming our >>>>user-interface for ontologies; we are designing a user interface for >>>>annotating actors and datasets (for datasets, there are also UIs such >>>>as Morhpo); and working on taxonomic browsing. >>>> >>>>I definately think that now in the project is a great time to take a >>>>step back, and as these interfaces are being designed and implemented >>>>(as well as the lower-level technology), to be informed by real >>>>user-needs. >>>> >>>> >>>>Here is what I think needs to be done to do an effective top-down >>>>design: >>>> >>>>1. Clearly identify our target user group(s) and the general benefit >>>>we believe SEEK will provide to these groups. In particular, who are >>>>we developing the "SEEK system" for, and what are their >>>>problems/needs and constraints. Capture this as a report. (As an >>>>aside, it will be very hard to evaluate the utility of SEEK without >>>>understanding who it is meant to help, and how it is meant to help >>>>them.) >>>> >>>>2. Assemble a representive group of target users. As Rod suggests, >>>>there should be participants that are independent of SEEK. [I >>>>attended one meeting that was close to this in Abq in Aug. 2003 -- >>>>have there been others?] >>>> >>>>3. Identify the needs of the representive group in terms of SEEK. >>>>These might be best represented as "user stories" (i.e., scenarios) >>>>initially as opposed to use cases. I think there are two types of >>>>user stories that are extremely benefitial: (1) as a scenario of how >>>>some process works now, e.g., the story of a scientist that needed to >>>>run a niche model; (2) ask the user to tell us "how you would like >>>>the system to work" for the stories from 1. >>>> >>>>4. 
Synthesize the user stories into a set of target use cases that >>>>touch a wide range of functionality. Develop and refine the use cases. >>>> >>>>5. From the use cases and user constraints, design one or more >>>>"storyboard" user interfaces, or the needed user interface components >>>>from the use cases. At this point, there may be different possible >>>>interfaces, e.g., a high-level ontology based interface as suggested >>>>by Ferdinando and a low-level Kepler-based interface. This is where >>>>we need to be creative to address user needs. >>>> >>>>6. Get feedback from the target users on the "storyboard" interfaces >>>>(i.e., let them evaluate the interfaces). Revisit the user stories >>>>via the storyboards. Refine the second part of 3, and iterate 5 and 6. >>>> >>>>7. Develop one or more "prototypes" (i.e., the interface with canned >>>>functionality). Let the user group play with it, get feedback, and >>>>iterate. >>>> >>>>8. The result should be "the" user interface. >>>> >>>> >>>>One of the most important parts of this process is to identify the >>>>desired characteristics of the target users, and to pick a >>>>representative group of users that can lead to the widest array of >>>>use-cases/user-stories that are most benefitial to the target users. >>>> >>>>For example, we have primarily focused on niche-modeling as the use >>>>case. (This isn't a great example, but bear with me) If our sample >>>>user group only consisted of scientists that did niche modeling, or >>>>if this were our target user group, we would probably build a user >>>>interface around, and specific to niche modeling (i.e., niche >>>>modeling should become an integral, and probably embedded, part of >>>>the interface). Of course, for us, this isn't necessarily true >>>>because we know we have a more general target user group. But, >>>>hopefully you get the point. 
>>>> >>>> >>>>shawn >>>> >>>> >>>> >>>> >>>>>Rod >>>>> >>>>> >>>>>Deana Pennington wrote: >>>>> >>>>> >>>>> >>>>>>In thinking about the Kepler UI, it has occurred to me that it >>>>>>would really be nice if the ontologies that we construct to >>>>>>organize the actors into categories, could also be used in a >>>>>>high-level workflow design phase. For example, in the niche >>>>>>modeling workflow, GARP, neural networks, GRASP and many other >>>>>>algorithms could be used for that one step in the workflow. Those >>>>>>algorithms would all be organized under some high-level hierarchy >>>>>>("StatisticalModels"). Another example is the Pre-sample step, >>>>>>where we are using the GARP pre-sample algorithm, but other >>>>>>sampling algorithms could be substituted. There should be a >>>>>>high-level "Sampling" concept, under which different sampling >>>>>>algorithms would be organized. During the design phase, the user >>>>>>could construct a workflow based on these high level concepts >>>>>>(Sampling and StatisticalModel), then bind an actor (already >>>>>>implemented or using Chad's new actor) in a particular view of that >>>>>>workflow. So, a workflow would be designed at a high conceptual >>>>>>level, and have multiple views, binding different algorithms, and >>>>>>those different views would be logically linked through the high >>>>>>level workflow. The immediate case is the GARP workflow we are >>>>>>designing will need another version for the neural network >>>>>>algorithm, and that version will be virtually an exact replicate >>>>>>except for that actor. Seems like it would be better to have one >>>>>>workflow with different views... 
>>>>>> >>>>>>I hope the above is coherent...in reading it, I'm not sure that it >>>>>>is :-) >>>>>> >>>>>>Deana >>>>>> >>>>>> >>>>>> >>>>>> >>>>_______________________________________________ >>>>seek-dev mailing list >>>>seek-dev at ecoinformatics.org >>>>http://www.ecoinformatics.org/mailman/listinfo/seek-dev >>>> >>>> >>> >>> >>_______________________________________________ >>seek-kr-sms mailing list >>seek-kr-sms at ecoinformatics.org >>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040614/0fab7aba/attachment.htm From dpennington at lternet.edu Mon Jun 14 11:56:38 2004 From: dpennington at lternet.edu (Deana Pennington) Date: Mon, 14 Jun 2004 12:56:38 -0600 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <1087228283.4452.32.camel@basil.snr.uvm.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> <40CDA082.5050602@ku.edu> <1087228283.4452.32.camel@basil.snr.uvm.edu> Message-ID: <40CDF4E6.7070703@lternet.edu> One way to look at this is to think about short-term (next year) vs. intermediate-term (next 3 years of SEEK) vs. long-term (decade+) visions. I am completely sold on Ferdinando's approach for the long term. I don't think that it's something we can fully implement within the SEEK time frame. But I do think that we can design some short/intermediate-term objectives that will build on the things we are already doing in Kepler and be consistent with the long view. Can we (the "open-minded, interested" SEEK participants of Ferdinando's opinion below) come up with some very short-term objectives that include both approaches? Something implementable within the next few months?
If so, then I would be very interested in getting a user group together to test 1) Kepler alone, 2) IMA alone, and 3) a prototype combined approach. Although I understand the desire to fully flesh out the intended user's thinking and needs by soliciting feedback from an extensive group of non-SEEK users, I'm pretty sure that any survey/test that we tried to conduct right now, with nothing but the current Kepler app to work with, will be severely hampered by their limited ability to envision anything else. So, does anyone want to propose something that would be implementable in the short term, that merges top-down/bottom-up approaches as a prototype? I have already suggested that we take the GARP pipeline and implement it in IMA. In my original posting to this list, I suggested that we take the limited number of actors that we are using, construct a partial ontology that represents them, link it with whatever other ontologies we are constructing, and design an interface that would allow the user to work with those concepts rather than directly with the Kepler actors, which could be selected and bound to the concepts in the final design step. It seems (to me) that this (or some other simplistic merging of approaches) should not be that hard to do. Ideas? Deana Ferdinando Villa wrote: >One way I would frame this discussion, thinking about the comment about >"visual modeling and analysis language" and the whole UI issue, is that >we need to start a synthesis (top-down) effort aimed at understanding >what's the language that shapes an ecologist's thinking when they >approach a problem, and characterize its relationship with the two >conceptual frameworks we've been concentrating on so far: the KR >framework and the workflow framework (in their abstract nature, before >going down to OWL and Ptolemy, and WITHOUT one thought to any pretty >screenshot!).
The exercise should highlight whether we need to (a) have >enough of one - maybe slightly extended - and infer the other, (b) find >something that sits in the middle, or (c) find something totally >different. This done, we should be able to easily define the visual >language that most closely embodies it. > >Back to personal opinions, I'll just add that it's my belief that this >process, although it needs very open minds, doesn't necessarily have to >be very long and very hard, and I think we have all the pieces in place >to quickly prototype the right UI (as opposed to the "advanced" one!) >when the idea is clear, without having to distance ourselves much from >things as they stand now... > >ferd > >On Mon, 2004-06-14 at 08:56, Rod Spears wrote: > > >>In many ways I think the current user-interface work for Kepler is >>almost orthogonal to this discussion. >> >>There are many issues with the current UI that need to be fixed ASAP, >>but I don't think that should keep us from getting a group together to >>start down the path that Shawn has outlined. >> >>If we (and we should) take a more process-oriented approach to >>developing the UI, this work really has little, if anything, to do with >>Kepler for quite some time. >> >>As I see it, the Kepler UI is really the "advanced" UI for SEEK. There is >>a whole lot of work that needs to go on before that. >> >>Deana has a very valid point as to how to begin this work with/without >>the usability position being filled. At the same time, many different >>aspects of the UI are beginning to take shape, and time is of the essence.
>> >>Rod >> >> >>Deana Pennington wrote: >> >> >> >>>Shawn & Rod, >>> >>>I think these are all great suggestions, and we've been discussing >>>putting together a group of ecologists for a couple of days of >>>testing, but: >>> >>>1) we thought that there are some major issues with the interface as >>>it stands right now that need to be fixed before we try to get a group >>>together, and >>>2) a decision needs to be made about the usability engineer position, so >>>that person can be involved right from the start in user testing and >>>UI design. >>> >>>So, I think we should table this discussion until the above 2 things >>>are resolved. It's obvious that this needs to be addressed soon. >>> >>>Deana >>> >>> >>>Shawn Bowers wrote: >>> >>> >>> >>>>Rod Spears wrote: >>>> >>>> >>>> >>>>>(This is a general reply to the entire thread that is on seek-kr-sms): >>>>> >>>>>In the end, there are really two very simple questions about what we >>>>>are all doing on SEEK: >>>>> >>>>>1) Can we make it work? >>>>> a) This begs the question of "how" to make it work. >>>>> >>>>>2) Will anybody use it? >>>>> a) This begs the question of "can" anybody use it? >>>>> >>>>>Shawn is right when he says we are coming at this from the >>>>>"bottom-up." SEEK has been very focused on the mechanics of how to >>>>>take legacy data and modeling techniques and create a new >>>>>environment to "house" them and better utilize them. In the end, if >>>>>you can't answer question #1, it doesn't matter whether you can answer >>>>>question #2. >>>>> >>>>>But at the same time, I have felt that we have been a little too >>>>>focused on #1, or at the very least we haven't been spending enough >>>>>time on question #2. >>>>> >>>>>Both Nico and Ferdinando touched on two very important aspects of what >>>>>we are talking about. Nico's comment about attacking the problem >>>>>from "both" ends (top down and bottom up) seems very appropriate.
>>>>>In fact, the more we know about the back-end, the better we know what >>>>>"tools" or functionality we have to develop for the front-end and >>>>>how best they can interact. >>>>> >>>>>Ferdinando's comment touches on the core of what concerns me the most, >>>>>and it is the realization of question #2. >>>>>His comment: "/I also think that the major impediment to an >>>>>understanding that requires a paradigm switch is the early >>>>>idealization of a graphical user interface/." Or more appropriately >>>>>known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >>>>> >>>>>We absolutely have to create a tool that scientists can use. So this >>>>>means we have to create a tool that "engages" the way they think >>>>>about modeling problems. Note that I used the word "engage", meaning >>>>>the tool doesn't have to be an exact reflection of their process for >>>>>creating models and doing analysis, but it has to be close enough >>>>>to make them want to "step up to the plate" and "take a swing for >>>>>the fence," as it were. >>>>> >>>>>In many ways, too, Ferdinando's comments touch on the problem I have >>>>>always had with Kepler. The UI is completely intertwined with the >>>>>model definition and the analysis specification. It has nearly zero >>>>>flexibility in how one "views" the "process" of entering the >>>>>model. (As a side note, the UI is one of the harder aspects of >>>>>Kepler to tailor.)
>>>>> >>>>>I am a realist and I understand how much functionality Kepler brings >>>>>to the table; it gives us such a head start in AMS. Maybe we need to >>>>>start thinking about a more "conceptual" tool that fits in front of >>>>>Kepler, but before that we need to really understand how the average >>>>>scientist would approach the SEEK technology. I'll say this as a >>>>>joke: "but that pretty much excludes any scientist working on SEEK," >>>>>but it is true. Never let the folks creating the technology tell you >>>>>how the technology should be used; that's the responsibility of the >>>>>user. >>>>> >>>>>I know the word "use case" has been thrown around daily as if it >>>>>were confetti, but I think the time is approaching where we need to >>>>>really focus on developing some "real" end-user use cases. I think a >>>>>much bigger effort and emphasis needs to be placed on the >>>>>"top-down." And some of the ideas presented in this entire thread >>>>>are a good start. >>>>> >>>>> >>>> >>>>Great synthesis and points, Rod. >>>> >>>>(Note that I un-cc'd kepler-dev, since this discussion is very much >>>>seek-specific.) >>>> >>>>I agree with you, Nico, and Ferdinando that we need top-down >>>>development (i.e., an understanding of the targeted user problems and >>>>needs, and how best to address these via end-user interfaces) as well >>>>as bottom-up development (underlying technology, etc.). >>>> >>>>I think that in general, we are at a point in the project where we >>>>have a good idea of the kinds of solutions we can provide (e.g., with >>>>EcoGrid, Kepler, SMS, Taxon, and so on).
>>>> >>>>And, we are beginning to get to the point where we are >>>>building/needing user interfaces: we are beginning to >>>>design/implement add-ons to Kepler, e.g., for EcoGrid querying and >>>>Ontology-enabled actor/dataset browsing; GrOWL is becoming our >>>>user-interface for ontologies; we are designing a user interface for >>>>annotating actors and datasets (for datasets, there are also UIs such >>>>as Morpho); and working on taxonomic browsing. >>>> >>>>I definitely think that now is a great time in the project to take a >>>>step back, and as these interfaces are being designed and implemented >>>>(as well as the lower-level technology), to be informed by real >>>>user-needs. >>>> >>>> >>>>Here is what I think needs to be done to do an effective top-down >>>>design: >>>> >>>>1. Clearly identify our target user group(s) and the general benefit >>>>we believe SEEK will provide to these groups. In particular, who are >>>>we developing the "SEEK system" for, and what are their >>>>problems/needs and constraints. Capture this as a report. (As an >>>>aside, it will be very hard to evaluate the utility of SEEK without >>>>understanding who it is meant to help, and how it is meant to help >>>>them.) >>>> >>>>2. Assemble a representative group of target users. As Rod suggests, >>>>there should be participants that are independent of SEEK. [I >>>>attended one meeting that was close to this in Abq in Aug. 2003 -- >>>>have there been others?] >>>> >>>>3. Identify the needs of the representative group in terms of SEEK. >>>>These might be best represented as "user stories" (i.e., scenarios) >>>>initially as opposed to use cases. I think there are two types of >>>>user stories that are extremely beneficial: (1) a scenario of how >>>>some process works now, e.g., the story of a scientist that needed to >>>>run a niche model; (2) ask the user to tell us "how you would like >>>>the system to work" for the stories from 1. >>>> >>>>4.
Synthesize the user stories into a set of target use cases that >>>>touch a wide range of functionality. Develop and refine the use cases. >>>> >>>>5. From the use cases and user constraints, design one or more >>>>"storyboard" user interfaces, or the needed user interface components >>>>from the use cases. At this point, there may be different possible >>>>interfaces, e.g., a high-level ontology-based interface as suggested >>>>by Ferdinando and a low-level Kepler-based interface. This is where >>>>we need to be creative to address user needs. >>>> >>>>6. Get feedback from the target users on the "storyboard" interfaces >>>>(i.e., let them evaluate the interfaces). Revisit the user stories >>>>via the storyboards. Refine the second part of 3, and iterate 5 and 6. >>>> >>>>7. Develop one or more "prototypes" (i.e., the interface with canned >>>>functionality). Let the user group play with it, get feedback, and >>>>iterate. >>>> >>>>8. The result should be "the" user interface. >>>> >>>> >>>>One of the most important parts of this process is to identify the >>>>desired characteristics of the target users, and to pick a >>>>representative group of users that can lead to the widest array of >>>>use-cases/user-stories that are most beneficial to the target users. >>>> >>>>For example, we have primarily focused on niche-modeling as the use >>>>case. (This isn't a great example, but bear with me.) If our sample >>>>user group only consisted of scientists that did niche modeling, or >>>>if this were our target user group, we would probably build a user >>>>interface around, and specific to, niche modeling (i.e., niche >>>>modeling should become an integral, and probably embedded, part of >>>>the interface). Of course, for us, this isn't necessarily true >>>>because we know we have a more general target user group. But, >>>>hopefully you get the point.
>>>> >>>> >>>>shawn >>>> >>>> >>>> >>>> >>>>>Rod >>>>> >>>>> >>>>>Deana Pennington wrote: >>>>> >>>>> >>>>> >>>>>>In thinking about the Kepler UI, it has occurred to me that it >>>>>>would really be nice if the ontologies that we construct to >>>>>>organize the actors into categories, could also be used in a >>>>>>high-level workflow design phase. For example, in the niche >>>>>>modeling workflow, GARP, neural networks, GRASP and many other >>>>>>algorithms could be used for that one step in the workflow. Those >>>>>>algorithms would all be organized under some high-level hierarchy >>>>>>("StatisticalModels"). Another example is the Pre-sample step, >>>>>>where we are using the GARP pre-sample algorithm, but other >>>>>>sampling algorithms could be substituted. There should be a >>>>>>high-level "Sampling" concept, under which different sampling >>>>>>algorithms would be organized. During the design phase, the user >>>>>>could construct a workflow based on these high level concepts >>>>>>(Sampling and StatisticalModel), then bind an actor (already >>>>>>implemented or using Chad's new actor) in a particular view of that >>>>>>workflow. So, a workflow would be designed at a high conceptual >>>>>>level, and have multiple views, binding different algorithms, and >>>>>>those different views would be logically linked through the high >>>>>>level workflow. The immediate case is the GARP workflow we are >>>>>>designing will need another version for the neural network >>>>>>algorithm, and that version will be virtually an exact replicate >>>>>>except for that actor. Seems like it would be better to have one >>>>>>workflow with different views... 
>>>>>> >>>>>>I hope the above is coherent...in reading it, I'm not sure that it >>>>>>is :-) >>>>>> >>>>>>Deana >>>>>> >>>>>> >>>>>> >>>>>> >>>>_______________________________________________ >>>>seek-dev mailing list >>>>seek-dev at ecoinformatics.org >>>>http://www.ecoinformatics.org/mailman/listinfo/seek-dev >>>> >>>> >>> >>> >>> >>_______________________________________________ >>seek-kr-sms mailing list >>seek-kr-sms at ecoinformatics.org >>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms >> >> -- ******** Deana D. Pennington, PhD Long-term Ecological Research Network Office UNM Biology Department MSC03 2020 1 University of New Mexico Albuquerque, NM 87131-0001 505-272-7288 (office) 505 272-7080 (fax) From bowers at sdsc.edu Mon Jun 14 12:28:59 2004 From: bowers at sdsc.edu (Shawn Bowers) Date: Mon, 14 Jun 2004 12:28:59 -0700 Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CDF4E6.7070703@lternet.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> <40CDA082.5050602@ku.edu> <1087228283.4452.32.camel@basil.snr.uvm.edu> <40CDF4E6.7070703@lternet.edu> Message-ID: <40CDFC7B.4000901@sdsc.edu> Deana Pennington wrote: > Although I understand the desire to fully flesh out the intended user's > thinking and needs by soliciting feedback from an extensive group of > non-SEEK users, I'm pretty sure that any survey/test that we tried to > conduct right now, with nothing but the current Kepler app to work with, > will be severely hampered by their limited ability to envision anything else. I don't know if you are referring here to what I suggested below, but if so, this isn't what I was saying at all.
shawn From ludaesch at sdsc.edu Tue Jun 15 03:31:36 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Tue, 15 Jun 2004 03:31:36 -0700 (PDT) Subject: [seek-kr-sms] Re: [kepler-dev] introduction & distributed ptolemy In-Reply-To: <1087226719.4452.17.camel@basil.snr.uvm.edu> References: <5.1.0.14.2.20040612072426.02628540@mho.eecs.berkeley.edu> <16589.48492.642188.623683@multivac.sdsc.edu> <1087226719.4452.17.camel@basil.snr.uvm.edu> Message-ID: <16590.51898.427870.370793@multivac.sdsc.edu> >>>>> "FV" == Ferdinando Villa writes: FV> FV> On Mon, 2004-06-14 at 11:01, Bertram Ludaescher wrote: >> Ideally the ports of an actor should (or at least could) >> have multiple types: >> - the data type (including say XML Schema type), >> - the semantic type (e.g.
a concept expression describing more formally >> what else might be known about the data flowing through the port) >> [[aside for Ferdinando: our "reductionist/separatist approach" does not >> preclude forever an integrated modeling solution - it's just bottom up >> to get sth useful soon/in finite time ;-]] FV> FV> Hi Bertram! one point - if you read "reductionistic" (which I probably FV> wrote somewhere) as "reductive" you're misinterpreting me - "we" FV> ecologists mostly see reduct. vs. holistic as complementary (with FV> hierarchical thinking as a possible integrating framework) so when we FV> say reductionistic, we mean exactly what you also mean... one GOOD way FV> to look at the problem, usually the most practical, easier to study, FV> while often the least conducive to synthetic understanding... but NOT FV> separatist!!!! oh, that's good news (that we probably mean the same thing). BTW: by separatist I didn't mean it in the sense of say, "Pais Basque" but rather in the sense of "sorting things out". FV> Philosophy aside, here's a more SEEK-specific provocation: don't you FV> think that the "conceptual/holistic/top-down" approach is the one that FV> needs the semantic types, while the "workflow/reductionist/ptolemy" FV> approach would be just fine with just the "storage/machine" types of FV> whatever virtual machine AMS will be? If you view the latter as just the low level implementation platform, then yes. But I think it can be more than that and hopefully also serve us through all the levels from conceptual workflows/models to a machine-level dataflow engine (or grid workflow if nothing else) FV> Also: where do the "transformations" belong (scaling, units)?
I'd argue FV> they belong "mechanically" to the reductionistic side - just like all FV> other actors, created to calculate a concept - and if the user doesn't FV> need to see them, it's not because we hide them up in the conceptual FV> description, but because they're actors in the workflow, and the FV> conceptual description that users work with is not the workflow. I see. Indeed, workflows (or dataflows for that matter) might "smell" a bit more procedural than what one would like. But don't you think it is a suitable formalism to describe analytical processes and procedures? What else would one use? Sometimes (often) a scientist might know exactly the algorithm, dataflow network, procedure she wants to execute. Why not allow the scientist to express the desired flow as such? FV> FV> Maybe we're mixing the sides up somewhat, and if so, is this ok... or is FV> it going to postpone the beautiful "moment of clarity" when we all FV> realize that we've all been thinking the same thing all along? probably that moment of clarity and realization will come along sometime soon ... if not, we need further research ... that's also good B-) cheers Bertram FV> FV> Cheers, FV> ferdinand FV> FV> >> - the event consumption/production type (useful for scheduling a la >> SDF) >> - the communication type (through the Ptolemy/Kepler client, directly >> via say FTP or HTTP) etc >> >> At some levels of modeling one does explicitly hide such detail from >> the modeler/user but at other levels this might be a good way of >> overcoming some scalability issues (if you have terabyte data streams >> you want them to go directly where they need to) >> >> A related problem of web services (as actors) is that they send results >> back to the caller (Kepler) and don't forward them to the subsequent >> actor, making large data transfers virtually impossible.. >> >> A simple extension to the web service model (does anyone know whether >> that's already done???)
would allow for data to include *references* >> so that a process would be able to return to Kepler just a reference to >> the result data and that reference would be passed on to the consuming >> actor who then understands how to dereference it. This simple >> extension seems to be an easy solution to what we called before the >> 3rd party transfer problem: >> --> [Actor A] ---> [ Actor B] --> ... >> >> To stream large data set D from A to B w/o going through >> Ptolemy/Kepler one can simply send instead a handle &D and then B, >> upon receiving &D, understands and dereferences it by calling the >> appropriate protocol (FTP/gridFTP, HTTP, SRB,...) >> >> Note that there are already explicit Kepler actors (SRBread/SRBwrite, >> gridFTP) for large data transfer. It would be more elegant to just >> send handles in the form, e.g., dereference(http://.....) >> Note that the special tag 'dereference' is needed since not every URL >> should be dereferenced (a URL can be perfectly valid data all by >> itself) >> >> It would be good if we could (a) define our extensions in line with >> web services extensions that deal with dereferencing message parts (if >> such exist) and (b) work on a joint >> Kepler/Ptolemy/Roadnet/SEEK/SDM etc approach (in fact, Kepler is such >> a joint forum for co-designing this together..) >> >> Bertram >> >> PS Tobin: I recently met Kent and heard good news about ORB access in >> Kepler already. You can also check with Efrat at SDSC on 3rd party >> transfer issues while you're at SDSC.. >> >> >>>>> "EAL" == Edward A Lee writes: EAL> EAL> At 05:48 PM 6/11/2004 -0700, Tobin Fricke wrote: >> >> A basic question I have is, is there a defined network transport for >> >> Ptolemy relations? I expect that this question isn't really well-formed >> >> as I still have some reading to do on how relations actually work.
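[Editorial note] The handle-passing extension Bertram sketches above can be illustrated in a few lines. This is a hypothetical sketch in Python, not Kepler/Ptolemy code; the names `Handle`, `PROTOCOLS`, `register`, and `receive` are all invented for illustration.

```python
# Hypothetical sketch of the reference-passing idea (not the Kepler/Ptolemy
# API): actor A emits a tagged handle &D instead of the data D itself; the
# consuming actor B recognizes the tag and fetches the payload directly with
# the appropriate protocol (FTP/gridFTP, HTTP, SRB, ...), so the bulk data
# never flows through the workflow engine.

PROTOCOLS = {}  # scheme ("http", "ftp", "srb", ...) -> fetch function

def register(scheme, fetch):
    PROTOCOLS[scheme] = fetch

class Handle:
    """A token carrying only a reference to the data, not the data itself."""
    def __init__(self, url):
        self.url = url

def receive(token):
    """What a consuming actor does with an incoming token.

    The explicit Handle tag matters: a bare URL string can be perfectly
    valid data all by itself and must NOT be dereferenced."""
    if isinstance(token, Handle):
        scheme = token.url.split("://", 1)[0]
        return PROTOCOLS[scheme](token.url)  # fetch directly, 3rd-party style
    return token  # ordinary tokens pass through unchanged
```

With a toy in-memory protocol registered, `receive(Handle("mem://d1"))` fetches the payload, while a plain URL string flows through untouched.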
>> >> Nonetheless, there is the question of, if we have different instances of >> >> Ptolemy talking to each other across the network, how are the data streams >> >> transmitted? In our case one option is to use the ORB as the stream >> >> transport, equipping each sub-model with ORB source and ORB sink >> >> components; and perhaps this could be done implicitly to automatically >> >> distribute a model across the network. But this line of thinking is >> >> strongly tied to the idea of data streams and may not be appropriate for >> >> the more general notion of relations in Ptolemy. EAL> EAL> We have done quite a bit of experimentation with distributed EAL> Ptolemy II models, but haven't completely settled on any one EAL> approach... Most of the recent work in this area has been EAL> done by Yang Zhao, whom I've cc'd for additional comments... EAL> Here are some notes: EAL> EAL> - A model can contain a component that is defined elsewhere EAL> on the network, referenced at a URL. There is a demo EAL> in the quick tour that runs a submodel that sits on our EAL> web server. EAL> EAL> - The Corba library provides a mechanism for transporting EAL> tokens from one model to another using either push or EAL> pull style interactions. The software is in the EAL> ptolemy.actor.corba package, but there are currently EAL> no good (easily run) demos, and documentation is sparse. EAL> EAL> - The MobileModel actor accepts a model definition on an EAL> input port and then executes that model. Yang has used EAL> this with the Corba actors to build models where one EAL> model constructs another model and sends it to another EAL> machine on the network to execute. EAL> EAL> - The JXTA library (ptolemy.actor.lib.jxta) uses Sun's EAL> XML-based P2P mechanism. Yang has used this to construct EAL> a distributed chat room application. 
EAL> EAL> - The ptolemy.actor.lib.net has two actors DatagramReader EAL> and DatagramWriter that provide low-level mechanisms for EAL> models to communicate over the net. Three or four years EAL> ago Win Williams used this to create a distributed model EAL> where two computers on the net were connected to EAL> motor controllers and users could "arm wrestle" over EAL> the network ... when one of the users turned his motor, EAL> the other motor would turn, and they could fight each EAL> other, trying to turn the motors in opposite directions. EAL> EAL> - Some years ago we also did some experimentation with EAL> Sun's JINI P2P mechanism, but this has been largely EAL> supplanted by JXTA. EAL> EAL> - The security library (ptolemy.actor.lib.security) EAL> provides encryption and decryption and authentication EAL> based on digital signatures. EAL> EAL> Most of these mechanisms have not been well packaged, EAL> and we haven't worked out the "lifecycle management" issues EAL> (how to start up a distributed model systematically, how EAL> to manage network failures). EAL> EAL> In my view, working out these issues is a top priority... EAL> I would be delighted to work with you or anyone else on this... EAL> EAL> Edward EAL> EAL> EAL> EAL> EAL> EAL> ------------ EAL> Edward A.
Lee, Professor EAL> 518 Cory Hall, UC Berkeley, Berkeley, CA 94720 EAL> phone: 510-642-0455, fax: 510-642-2739 EAL> eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal EAL> EAL> _______________________________________________ EAL> kepler-dev mailing list EAL> kepler-dev at ecoinformatics.org EAL> http://www.ecoinformatics.org/mailman/listinfo/kepler-dev >> _______________________________________________ >> kepler-dev mailing list >> kepler-dev at ecoinformatics.org >> http://www.ecoinformatics.org/mailman/listinfo/kepler-dev FV> -- From ludaesch at sdsc.edu Tue Jun 15 03:47:20 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Tue, 15 Jun 2004 03:47:20 -0700 (PDT) Subject: [seek-dev] Re: [seek-kr-sms] UI In-Reply-To: <40CDE853.7080909@ku.edu> References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> <40CDA082.5050602@ku.edu> <1087228283.4452.32.camel@basil.snr.uvm.edu> <40CDE853.7080909@ku.edu> Message-ID: <16590.53810.464561.681996@multivac.sdsc.edu> >>>>> "RS" == Rod Spears writes: RS> RS> I agree with Ferdinando and the entire problem can be boiled down to his RS> quote "/we need to start a synthesis (top-down) effort aimed at RS> understanding what's the language that shapes an ecologist's thinking RS> when they approach a problem/" RS> RS> Of course, I think the word "language" is both literal and figurative. RS> RS> I disagree with the notion that Kepler is a "/visual modeling and RS> analysis language/." Or if it is that, it is at too low a level and at the RS> moment entirely too difficult for the non-SEEK scientists to use. Boxes with arrows between them are about as abstract as you can get, I think (as well as concrete, depending on what the boxes stand for). Maybe we need to think about how IMA or other more conceptual approaches can be viewed as boxes with arrows between them.
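[Editorial note] Bertram's point that "boxes with arrows" are both abstract and concrete can be made literal: the same graph structure can stand for a dataflow of actors or, read differently, a conceptual schema. A minimal sketch (my own illustration, not Ptolemy code; `Box`, `to`, and `downstream` are invented names):

```python
# A "box with arrows" is just a named node with directed edges; what the
# boxes *mean* (actors, entities, concepts) is up to the interpretation --
# in Ptolemy terms, up to the director.

class Box:
    def __init__(self, name):
        self.name = name
        self.arrows = []          # outgoing edges to other boxes

    def to(self, other):
        self.arrows.append(other)
        return other              # return target so connections can chain

def downstream(box):
    """One possible 'interpretation': everything reachable along arrows."""
    seen, stack = set(), [box]
    while stack:
        b = stack.pop()
        if b.name not in seen:
            seen.add(b.name)
            stack.extend(b.arrows)
    return seen
```

Read as a workflow, `downstream(sampling)` lists the steps fed by a Sampling step; read as an ER diagram, the same traversal walks the related entities.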
I might be stretching this a bit, but I think that in principle it should be possible to draw an ER or UML class diagram as an actor network. With the right "director" such a network might make it possible to query the conceptual ER/UML schema or some underlying databases conforming to that schema. It's certainly different from what Ptolemy folks had originally in mind, but given their very abstract approach and the hooks into the system I wouldn't be surprised if such a conceptual data modeling and querying tool were in fact quite easy to implement on top of Ptolemy. However, even more interesting is, again, to think about what structured approach to analysis pipelines or integrated modeling one would need.. Bertram RS> The solution isn't "just fix Kepler's UI." Kepler has an important role to RS> play in the project, it is a very powerful tool as we all know. The RS> point is, let's not /force/ it to play a role it isn't necessarily meant RS> to play. RS> RS> Rod RS> RS> RS> Ferdinando Villa wrote: RS> >> One way I would frame this discussion, thinking about the comment about >> "visual modeling and analysis language" and the whole UI issue, is that >> we need to start a synthesis (top-down) effort aimed at understanding >> what's the language that shapes an ecologist's thinking when they >> approach a problem, and characterize its relationship with the two >> conceptual frameworks we've been concentrating on so far: the KR >> framework and the workflow framework (in their abstract nature, before >> going down to OWL and Ptolemy, and WITHOUT one thought to any pretty >> screenshot!). The exercise should highlight whether we need to (a) have >> enough of one - maybe slightly extended - and infer the other, (b) find >> something that sits in the middle, or (c) find something totally >> different. This done, we should be able to easily define the visual >> language that most closely embodies it.
>> >> Back to personal opinions, I'll just add that it's my belief that this >> process, although it needs very open minds, doesn't necessarily have to >> be very long and very hard, and I think we have all the pieces in place >> to quickly prototype the right UI (as opposed to the "advanced" one!) >> when the idea is clear, without having to distance ourselves much from >> things as they stand now... >> >> ferd >> >> On Mon, 2004-06-14 at 08:56, Rod Spears wrote: >> >> >>> In many ways I think the current user-interface work for Kepler is >>> almost orthogonal to this discussion. >>> >>> There are many issues with the current UI that need to be fixed ASAP, >>> but I don't think it should keep us from getting a group together to >>> start down the path that Shawn has outlined. >>> >>> If we (and we should) take a more process-oriented approach to >>> developing the UI, this work really has little, if anything, to do with >>> Kepler for quite some time. >>> >>> As I see it the Kepler UI is really the "advanced" UI for SEEK. There is >>> a whole lot of work that needs to go on before that. >>> >>> Deana has a very valid point as to how to begin this work with/without >>> the usability position being filled. At the same time, many different >>> aspects of the UI are beginning to take shape and time is of the essence.
>>> >>> Rod >>> >>> >>> Deana Pennington wrote: >>> >>> >>> >>>> Shawn & Rod, >>>> >>>> I think these are all great suggestions, and we've been discussing >>>> putting together a group of ecologists for a couple of days of >>>> testing, but: >>>> >>>> 1) we thought that there are some major issues with the interface as >>>> it stands right now that need to be fixed before we try to get a group >>>> together, and >>>> 2) a decision needs to be made about the usability engineer position, so >>>> that person can be involved right from the start in user testing and >>>> UI design >>>> >>>> So, I think we should table this discussion until the above 2 things >>>> are resolved. It's obvious that this needs to be addressed soon. >>>> >>>> Deana >>>> >>>> >>>> Shawn Bowers wrote: >>>> >>>> >>>> >>>>> Rod Spears wrote: >>>>> >>>>> >>>>> >>>>>> (This is a general reply to the entire thread that is on seek-kr-sms): >>>>>> >>>>>> In the end, there are really two very simple questions about what we >>>>>> are all doing on SEEK: >>>>>> >>>>>> 1) Can we make it work? >>>>> a) This begs the question of "how" to make it work. >>>>>> >>>>>> 2) Will anybody use it? >>>>> a) This begs the question of "can" anybody use it? >>>>>> >>>>>> Shawn is right when he says we are coming at this from the >>>>>> "bottom-up." SEEK has been very focused on the mechanics of how to >>>>>> take legacy data and modeling techniques and create a new >>>>>> environment to "house" them and better utilize them. In the end, if >>>>>> you can't answer question #1, it doesn't matter whether you can answer >>>>>> question #2. >>>>>> >>>>>> But at the same time I have felt that we have been a little too >>>>>> focused on #1, or at the very least we haven't been spending enough >>>>>> time on question #2. >>>>>> >>>>>> Both Nico and Ferdinando touched on two very important aspects of what >>>>>> we are talking about.
Nico's comment about attacking the problem >>>>>> from "both" ends (top down and bottom up) seems very appropriate. >>>>>> In fact, the more we know about the back-end the better we know what >>>>>> "tools" or functionality we have to develop for the front-end and >>>>>> how best they can interact. >>>>>> >>>>>> Ferdinando's comment touches on the core of what concerns me the most, >>>>>> and it is the realization of question #2. >>>>>> His comment: "/I also think that the major impediment to an >>>>>> understanding that requires a paradigm switch is the early >>>>>> idealization of a graphical user interface/." Or more appropriately >>>>>> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >>>>>> >>>>>> We absolutely have to create a tool that scientists can use. So this >>>>>> means we have to create a tool that "engages" the way they think >>>>>> about modeling problems. Note that I used the word "engage", meaning >>>>>> the tool doesn't have to be an exact reflection of their process for >>>>>> creating models and doing analysis, but it has to be close enough >>>>>> to make them want to "step up to the plate" and "take a swing for >>>>>> the fence" as it were. >>>>>> >>>>>> In many ways too, Ferdinando's comment touches on the problem I have >>>>>> always had with Kepler. The UI is completely intertwined with the >>>>>> model definition and the analysis specification. It has nearly zero >>>>>> flexibility in how one "views" the "process" of entering in the >>>>>> model. (As a side note, the UI is one of the harder aspects of >>>>>> Kepler to tailor) >>>>>> >>>>>> In a perfect world of time and budgets it would be nice to create a >>>>>> tool that has a standalone Modeling and Analysis Definition Language, >>>>>> then a core standalone analysis/simulation engine, and lastly a set >>>>>> of GUI tools that assist the scientists in creating the models and >>>>>> monitoring the execution. Notice how the GUI came last?
The GUI >>>>>> needs to be born out of the underlying technology instead of >>>>>> defining it. >>>>>> >>>>>> I am a realist and I understand how much functionality Kepler brings >>>>>> to the table; it gives us such a head start in AMS. Maybe we need to >>>>>> start thinking about a more "conceptual" tool that fits in front of >>>>>> Kepler, but before that we need to really understand how the average >>>>>> scientist would approach the SEEK technology. I'll say this as a >>>>>> joke: "but that pretty much excludes any scientist working on SEEK," >>>>>> but it is true. Never let the folks creating the technology tell you >>>>>> how the technology should be used; that's the responsibility of the >>>>>> user. >>>>>> >>>>>> I know the word "use case" has been thrown around daily as if it >>>>>> were confetti, but I think the time is approaching when we need to >>>>>> really focus on developing some "real" end-user use cases. I think a >>>>>> much bigger effort and emphasis needs to be placed on the >>>>>> "top-down." And some of the ideas presented in this entire thread are >>>>>> a good start. >>>>> >>>>>> >>>>> Great synthesis and points Rod. >>>>> >>>>> (Note that I un-cc'd kepler-dev, since this discussion is very much >>>>> seek-specific) >>>>> >>>>> I agree with you, Nico, and Ferdinando that we need top-down >>>>> development (i.e., an understanding of the targeted user problems and >>>>> needs, and how best to address these via end-user interfaces) as well >>>>> as bottom-up development (underlying technology, etc.). >>>>> >>>>> I think that in general, we are at a point in the project where we >>>>> have a good idea of the kinds of solutions we can provide (e.g., with >>>>> EcoGrid, Kepler, SMS, Taxon, and so on).
>>>>> And, we are beginning to get to the point where we are >>>>> building/needing user interfaces: we are beginning to >>>>> design/implement add-ons to Kepler, e.g., for EcoGrid querying and >>>>> Ontology-enabled actor/dataset browsing; GrOWL is becoming our >>>>> user-interface for ontologies; we are designing a user interface for >>>>> annotating actors and datasets (for datasets, there are also UIs such >>>>> as Morpho); and working on taxonomic browsing. >>>>> >>>>> I definitely think that now in the project is a great time to take a >>>>> step back, and as these interfaces are being designed and implemented >>>>> (as well as the lower-level technology), to be informed by real >>>>> user-needs. >>>>> >>>>> >>>>> Here is what I think needs to be done to do an effective top-down >>>>> design: >>>>> >>>>> 1. Clearly identify our target user group(s) and the general benefit >>>>> we believe SEEK will provide to these groups. In particular, who are >>>>> we developing the "SEEK system" for, and what are their >>>>> problems/needs and constraints. Capture this as a report. (As an >>>>> aside, it will be very hard to evaluate the utility of SEEK without >>>>> understanding who it is meant to help, and how it is meant to help >>>>> them.) >>>>> >>>>> 2. Assemble a representative group of target users. As Rod suggests, >>>>> there should be participants that are independent of SEEK. [I >>>>> attended one meeting that was close to this in Abq in Aug. 2003 -- >>>>> have there been others?] >>>>> >>>>> 3. Identify the needs of the representative group in terms of SEEK. >>>>> These might be best represented as "user stories" (i.e., scenarios) >>>>> initially as opposed to use cases.
I think there are two types of >>>>> user stories that are extremely beneficial: (1) as a scenario of how >>>>> some process works now, e.g., the story of a scientist that needed to >>>>> run a niche model; (2) ask the user to tell us "how you would like >>>>> the system to work" for the stories from 1. >>>>> >>>>> 4. Synthesize the user stories into a set of target use cases that >>>>> touch a wide range of functionality. Develop and refine the use cases. >>>>> >>>>> 5. From the use cases and user constraints, design one or more >>>>> "storyboard" user interfaces, or the needed user interface components >>>>> from the use cases. At this point, there may be different possible >>>>> interfaces, e.g., a high-level ontology-based interface as suggested >>>>> by Ferdinando and a low-level Kepler-based interface. This is where >>>>> we need to be creative to address user needs. >>>>> >>>>> 6. Get feedback from the target users on the "storyboard" interfaces >>>>> (i.e., let them evaluate the interfaces). Revisit the user stories >>>>> via the storyboards. Refine the second part of 3, and iterate 5 and 6. >>>>> >>>>> 7. Develop one or more "prototypes" (i.e., the interface with canned >>>>> functionality). Let the user group play with it, get feedback, and >>>>> iterate. >>>>> >>>>> 8. The result should be "the" user interface. >>>>> >>>>> >>>>> One of the most important parts of this process is to identify the >>>>> desired characteristics of the target users, and to pick a >>>>> representative group of users that can lead to the widest array of >>>>> use-cases/user-stories that are most beneficial to the target users. >>>>> >>>>> For example, we have primarily focused on niche-modeling as the use >>>>> case.
(This isn't a great example, but bear with me) If our sample >>>>> user group only consisted of scientists that did niche modeling, or >>>>> if this were our target user group, we would probably build a user >>>>> interface around, and specific to niche modeling (i.e., niche >>>>> modeling should become an integral, and probably embedded, part of >>>>> the interface). Of course, for us, this isn't necessarily true >>>>> because we know we have a more general target user group. But, >>>>> hopefully you get the point. >>>>> >>>>> >>>>> shawn >>>>> >>>>> >>>>> >>>>> >>>>>> Rod >>>>>> >>>>>> >>>>>> Deana Pennington wrote: >>>>>> >>>>> >>>>>> >>>>>>> In thinking about the Kepler UI, it has occurred to me that it >>>>>>> would really be nice if the ontologies that we construct to >>>>>>> organize the actors into categories, could also be used in a >>>>>>> high-level workflow design phase. For example, in the niche >>>>>>> modeling workflow, GARP, neural networks, GRASP and many other >>>>>>> algorithms could be used for that one step in the workflow. Those >>>>>>> algorithms would all be organized under some high-level hierarchy >>>>>>> ("StatisticalModels"). Another example is the Pre-sample step, >>>>>>> where we are using the GARP pre-sample algorithm, but other >>>>>>> sampling algorithms could be substituted. There should be a >>>>>>> high-level "Sampling" concept, under which different sampling >>>>>>> algorithms would be organized. During the design phase, the user >>>>>>> could construct a workflow based on these high level concepts >>>>>>> (Sampling and StatisticalModel), then bind an actor (already >>>>>>> implemented or using Chad's new actor) in a particular view of that >>>>>>> workflow. So, a workflow would be designed at a high conceptual >>>>>>> level, and have multiple views, binding different algorithms, and >>>>>>> those different views would be logically linked through the high >>>>>>> level workflow. 
The immediate case is the GARP workflow we are >>>>>>> designing will need another version for the neural network >>>>>>> algorithm, and that version will be virtually an exact replicate >>>>>>> except for that actor. Seems like it would be better to have one >>>>>>> workflow with different views... >>>>>>> >>>>>>> I hope the above is coherent...in reading it, I'm not sure that it >>>>>>> is :-) >>>>>>> >>>>>>> Deana >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>> _______________________________________________ >>>>> seek-dev mailing list >>>>> seek-dev at ecoinformatics.org >>>>> http://www.ecoinformatics.org/mailman/listinfo/seek-dev >>>>> >>>>> >>>> >>>> >>> _______________________________________________ >>> seek-kr-sms mailing list >>> seek-kr-sms at ecoinformatics.org >>> http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms >>>
RS>
RS> Of course, I think the word "language" is both literal and figurative.
RS>
RS> I disagree with the notion that Kepler is a "visual modeling and RS> analysis language." Or if it is that, it is at too low level and RS> the moment entirely too difficult for the non-SEEK scientists to use. RS> The solution isn't "just fix Kepler's UI."   Kepler has an important RS> role to play in the project, it is a very powerful tool as we all know. RS> The point is, let's not force it to play role it isn't RS> necessarily RS> meant to play.
RS>
RS> Rod
RS>
RS>
RS> Ferdinando Villa wrote: RS>
type="cite"> RS>
One way I would frame this discussion, thinking about the comment about
RS> "visual modeling and analysis language" and the whole UI issue, is that
RS> we need to start a synthesis (top-down) effort aimed at understanding
RS> what's the language that shapes an ecologist's thinking when they
RS> approach a problem, and characterize its relationship with the two
RS> conceptual frameworks we've been concentrating on so far: the KR
RS> framework and the workflow framework (in their abstract nature, before
RS> going down to OWL and Ptolemy, and WITHOUT one thought to any pretty
RS> screenshot!). The exercise should highlight whether we need to (a) have
RS> enough of one - maybe slightly extended - and infer the other, (b) find
RS> something that sits in the middle, or (c) find something totally
RS> different. This done, we should be able to easily define the visual
RS> language that most closely embodies it.
RS> 
RS> Back to personal opinions, I'll just add that it's my belief that this
RS> process, although it needs very open minds, doesn't necessarily have to
RS> be very long and very hard, and I think we have all the pieces in place
RS> to quickly prototype the right UI (as opposed to the "advanced" one!)
RS> when the idea is clear, without having to distance ourselves much from
RS> things as they stand now...
RS> 
RS> ferd
RS> 
RS> On Mon, 2004-06-14 at 08:56, Rod Spears wrote:
RS>   
RS>
RS>
In many ways I think the current user-interface work for Kepler is 
RS> almost orthoginal to this discussion.
RS> 
RS> There are many issues with the current UI that need to be fixed ASAP, 
RS> but I don't think it should keep us from getting a group together to 
RS> start down the path that Shawn has outlined.
RS> 
RS> If we (and we should) take a more process oriented approach to 
RS> developing the UI this work really has little, if anything, to do with 
RS> Kepler for quite sometime.
RS> 
RS> As I see it the Kepler UI is really the "advanced" UI for SEEK. There is 
RS> a whole lot of work that needs to go on before that.
RS> 
RS> Deana has a very valid point as to how to begin this work with/without 
RS> the usability position being filled. At the same time, many different 
RS> aspects of the UI are being to take shape and time is of the essence.
RS> 
RS> Rod
RS> 
RS> 
RS> Deana Pennington wrote:
RS> 
RS>     
RS>
RS>
Shawn & Rod,
RS> 
RS> I think these are all great suggestions, and we've been discussimg 
RS> putting together a group of ecologists for a couple of days of 
RS> testing, but:
RS> 
RS> 1) we thought that there are some major issues with the interface as 
RS> it stands right now that need to be fixed before we try to get a group 
RS> together, and
RS> 2) a decision needs to made about the useability engineer position, so 
RS> that person can be involved right from the start in user testing and 
RS> UI design
RS> 
RS> So, I think we should table this discussion until the above 2 things 
RS> are resolved.  It's obvious that this needs to be addressed soon.
RS> 
RS> Deana
RS> 
RS> 
RS> Shawn Bowers wrote:
RS> 
RS>       
RS>
RS>
Rod Spears wrote:
RS> 
RS>         
RS>
RS>
(This is a general reply to the entire thread that is on seek-kr-sms):
RS> 
RS> In the end, there are really two very simple questions about what we 
RS> are all doing on SEEK:
RS> 
RS> 1) Can we make it work?
RS>     a) This begs the question of "how" to make it work.
RS> 
RS> 2) Will anybody use it?
RS>     a) This begs the question of "can" anybody use it?
RS> 
RS> Shawn is right when he says we are coming at this from the 
RS> "bottom-up." SEEK has been very focused on the mechanics of how to 
RS> take legacy data and modeling techniques and create a new 
RS> environment to "house" them and better utilize them. In the end, if 
RS> you can't answer question #1, it does matter whether you can answer 
RS> question #2.
RS> 
RS> But at the same time I have felt that we have been a little too 
RS> focused on #1, or at the very least we haven't been spending enough 
RS> time on question #2.
RS> 
RS> Both Nico and Fernando touched on two very important aspects of what 
RS> we are talking about. Nico's comment about attacking the problem 
RS> from "both" ends (top down and bottom up)  seems very appropriate. 
RS> In fact, the more we know about the back-end the better we know what 
RS> "tools" or functionality we have to develop for the front-end and 
RS> how best they can interact.
RS> 
RS> Fernando's comment touches on the core of what concerns me the most, 
RS> and it is the realization of question #2
RS> His comment: "/I also think that the major impediment to an 
RS> understanding that requires a paradigm switch is the early 
RS> idealization of a graphical user interface/." Or more appropriately 
RS> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ).
RS> 
RS> We absolutely have to create a tool that scientists can use. So this 
RS> means we have to create a tool that "engages" the way they think 
RS> about modeling problems. Note that I used the word "engage", meaning 
RS> the tool doesn't to be an exact reflection of their process for 
RS> creating a models and doing analysis, but if has to be close enough 
RS> to make them want to "step up to the plate" and "take a swing for 
RS> the fence" as it were.
RS> 
RS> In many ways too, Fernando's comment touch on the the problem I have 
RS> always had with Kepler. The UI is completely intertwined with the 
RS> model definition and the analysis specification. It has nearly zero 
RS> flexibility in how one "views" the "process" of entering in the 
RS> model. (As a side note, the UI is one of the harder aspects of 
RS> Kepler to tailor)
RS> 
RS> In a perfect world of time and budgets it would be nice to create a 
RS> tool that has standalone Modeling and Analysis Definition Language, 
RS> then a core standalone analysis/simulation engine, and lastly a set 
RS> of GUI tools that assist the scientists in creating the models and 
RS> monitoring the execution. Notice how the GUI came last? The GUI 
RS> needs to be born out of the underlying technology instead of 
RS> defining it.
RS> 
RS> I am a realist and I understand how much functionality Kepler brings 
RS> to the table; it gives us such a head start in AMS. Maybe we need to 
RS> start thinking about a more "conceptual" tool that fits in front of 
RS> Kepler, but before that we need to really understand how the average 
RS> scientist would approach the SEEK technology. I'll say this as a 
RS> joke: "but that pretty much excludes any scientist working on SEEK," 
RS> but it is true. Never let the folks creating the technology tell you 
RS> how the technology should be used, that's the responsibility of the 
RS> user.
RS> 
RS> I know the word "use case" has been thrown around daily as if it 
RS> were confetti, but I think the time is approaching where we need to 
RS> really focus on developing some "real" end-user use cases. I think a 
RS> much bigger effort and emphasis needs to be placed on the 
RS> "top-down." And some of the ideas presented in this entire thread is 
RS> a good start.
RS>           
RS>
RS>
RS> Great synthesis and points Rod.
RS> 
RS> (Note that I un-cc'd kepler-dev, since this discussion is very much 
RS> seek-specific)
RS> 
RS> I agree with you, Nico, and Ferdinando that we need top-down 
RS> development (i.e., an understanding of the targeted user problems and 
RS> needs, and how best to address these via end-user interfaces) as well 
RS> as bottom-up development (underlying technology, etc.).
RS> 
RS> I think that in general, we are at a point in the project where we 
RS> have a good idea of the kinds of solutions we can provide (e.g., with 
RS> EcoGrid, Kepler, SMS, Taxon, and so on).
RS> 
RS> And, we are beginning to get to the point where we are 
RS> building/needing user interfaces: we are beginning to 
RS> design/implement add-ons to Kepler, e.g., for EcoGrid querying and 
RS> Ontology-enabled actor/dataset browsing; GrOWL is becoming our 
RS> user-interface for ontologies; we are designing a user interface for 
RS> annotating actors and datasets (for datasets, there are also UIs such 
RS> as Morpho); and working on taxonomic browsing.
RS> 
RS> I definitely think that now in the project is a great time to take a 
RS> step back, and as these interfaces are being designed and implemented 
RS> (as well as the lower-level technology), to be informed by real 
RS> user-needs.
RS> 
RS> 
RS> Here is what I think needs to be done to do an effective top-down 
RS> design:
RS> 
RS> 1. Clearly identify our target user group(s) and the general benefit 
RS> we believe SEEK will provide to these groups. In particular, who are 
RS> we developing the "SEEK system" for, and what are their 
RS> problems/needs and constraints.  Capture this as a report. (As an 
RS> aside, it will be very hard to evaluate the utility of SEEK without 
RS> understanding who it is meant to help, and how it is meant to help 
RS> them.)
RS> 
RS> 2. Assemble a representative group of target users. As Rod suggests, 
RS> there should be participants that are independent of SEEK. [I 
RS> attended one meeting that was close to this in Abq in Aug. 2003 -- 
RS> have there been others?]
RS> 
RS> 3. Identify the needs of the representative group in terms of SEEK. 
RS> These might be best represented as "user stories" (i.e., scenarios) 
RS> initially as opposed to use cases.  I think there are two types of 
RS> user stories that are extremely beneficial: (1) as a scenario of how 
RS> some process works now, e.g., the story of a scientist that needed to 
RS> run a niche model; (2) ask the user to tell us "how you would like 
RS> the system to work" for the stories from 1.
RS> 
RS> 4. Synthesize the user stories into a set of target use cases that 
RS> touch a wide range of functionality.  Develop and refine the use cases.
RS> 
RS> 5. From the use cases and user constraints, design one or more 
RS> "storyboard" user interfaces, or the needed user interface components 
RS> from the use cases.  At this point, there may be different possible 
RS> interfaces, e.g., a high-level ontology based interface as suggested 
RS> by Ferdinando and a low-level Kepler-based interface.  This is where 
RS> we need to be creative to address user needs.
RS> 
RS> 6. Get feedback from the target users on the "storyboard" interfaces 
RS> (i.e., let them evaluate the interfaces). Revisit the user stories 
RS> via the storyboards. Refine the second part of 3, and iterate 5 and 6.
RS> 
RS> 7. Develop one or more "prototypes" (i.e., the interface with canned 
RS> functionality). Let the user group play with it, get feedback, and 
RS> iterate.
RS> 
RS> 8. The result should be "the" user interface.
RS> 
RS> 
RS> One of the most important parts of this process is to identify the 
RS> desired characteristics of the target users, and to pick a 
RS> representative group of users that can lead to the widest array of 
RS> use-cases/user-stories that are most beneficial to the target users.
RS> 
RS> For example, we have primarily focused on niche-modeling as the use 
RS> case. (This isn't a great example, but bear with me) If our sample 
RS> user group only consisted of scientists that did niche modeling, or 
RS> if this were our target user group, we would probably build a user 
RS> interface around, and specific to niche modeling (i.e., niche 
RS> modeling should become an integral, and probably embedded, part of 
RS> the interface). Of course, for us, this isn't necessarily true 
RS> because we know we have a more general target user group. But, 
RS> hopefully you get the point.
RS> 
RS> 
RS> shawn
RS> 
RS> 
RS>         
RS>
RS>
Rod
RS> 
RS> 
RS> Deana Pennington wrote:
RS> 
RS>           
RS>
RS>
RS> In thinking about the Kepler UI, it has occurred to me that it 
RS> would really be nice if the ontologies that we construct to 
RS> organize the actors into categories, could also be used in a 
RS> high-level workflow design phase.  For example, in the niche 
RS> modeling workflow, GARP, neural networks, GRASP and many other 
RS> algorithms could be used for that one step in the workflow.  Those 
RS> algorithms would all be organized under some high-level hierarchy 
RS> ("StatisticalModels").  Another example is the Pre-sample step, 
RS> where we are using the GARP pre-sample algorithm, but other 
RS> sampling algorithms could be substituted.  There should be a 
RS> high-level "Sampling" concept, under which different sampling 
RS> algorithms would be organized.  During the design phase, the user 
RS> could construct a workflow based on these high level concepts 
RS> (Sampling and StatisticalModel), then bind an actor (already 
RS> implemented or using Chad's new actor) in a particular view of that 
RS> workflow.  So, a  workflow would be designed at a high conceptual 
RS> level, and have multiple views, binding different algorithms, and 
RS> those different views would be logically linked through the high 
RS> level workflow.  The immediate case is the GARP workflow we are 
RS> designing will need another version for the neural network 
RS> algorithm, and that version will be virtually an exact replicate 
RS> except for that actor.  Seems like it would be better to have one 
RS> workflow with different views...
RS> 
RS> I hope the above is coherent...in reading it, I'm not sure that it 
RS> is  :-)
RS> 
RS> Deana
RS> 
RS> 
RS>             
RS>
RS>
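Deana's idea above (one workflow designed at the concept level, with multiple logically linked "views" that bind different concrete actors) might be sketched as follows. This is a hypothetical illustration; the class names and binding mechanism are invented and are not Kepler's API.

```python
class ConceptStep:
    """A workflow step named by an ontology concept, e.g. 'Sampling'."""
    def __init__(self, concept):
        self.concept = concept


class ConceptualWorkflow:
    """An ordered pipeline of concept-level steps."""
    def __init__(self, steps):
        self.steps = steps  # list of ConceptStep

    def bind(self, bindings):
        """Produce a concrete view by binding each concept to an actor.

        `bindings` maps concept name -> actor name; each step must be bound
        to some actor organized under that concept in the ontology.
        """
        return [(s.concept, bindings[s.concept]) for s in self.steps]


# One conceptual niche-modeling workflow, two views bound to different actors:
niche = ConceptualWorkflow([ConceptStep("Sampling"),
                            ConceptStep("StatisticalModel")])

garp_view = niche.bind({"Sampling": "GARPPresample",
                        "StatisticalModel": "GARP"})
nn_view = niche.bind({"Sampling": "GARPPresample",
                      "StatisticalModel": "NeuralNetwork"})

print(garp_view)  # [('Sampling', 'GARPPresample'), ('StatisticalModel', 'GARP')]
```

Both views share the same high-level workflow, so the GARP and neural-network versions differ only in the binding, not in a duplicated workflow definition.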
RS>
_______________________________________________
RS> seek-dev mailing list
RS>   href="mailto:seek-dev at ecoinformatics.org">seek-dev at ecoinformatics.org
RS>   href="http://www.ecoinformatics.org/mailman/listinfo/seek-dev">http://www.ecoinformatics.org/mailman/listinfo/seek-dev
RS>         
RS>
RS>
RS>       
RS>
RS>
_______________________________________________
RS> seek-kr-sms mailing list
RS>   href="mailto:seek-kr-sms at ecoinformatics.org">seek-kr-sms at ecoinformatics.org
RS>   href="http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms">http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms
RS>     
RS>
RS>
RS>
RS>

From fvilla at uvm.edu Tue Jun 15 07:31:31 2004
From: fvilla at uvm.edu (Ferdinando Villa)
Date: Tue, 15 Jun 2004 10:31:31 -0400
Subject: [seek-kr-sms] (no subject)
Message-ID: <1087309891.40cf084319694@webmail.uvm.edu>

> I see. Indeed, workflows (or dataflows for that matter) might "smell"
> a bit more procedural than what one would like. But don't you think it
> is a suitable formalism to describe analytical processes and
> procedures? What else would one use? Sometimes (often) a scientist
> might know exactly the algorithm, dataflow network, procedure she
> wants to execute. Why not allow the scientist to express the desired
> flow as such?

Yes - but I don't think we should let a less-than-entirely-typical use
case determine the design. In fact, I think a better case can be made
for the semantically aware workflow actually BEING (embodying) a
concept - one that can be modelled through a conceptual approach. It
doesn't take much - even conceptually - to substitute the connection
arrows with semantically aware relationships. The mechanics of type
checking (semantic and otherwise, if I'm right in thinking that
semantic type compatibility subsumes storage type compatibility)
reduces to that of instance validation and is naturally done by SMS.
And what you get is a nice logical diagram that AMS doesn't have to
work hard to transpose into a workflow (the [unpublished] IMA design
may illustrate that if interested).

Anyway - I also don't see why the user may not see or directly create
the workflow if preferred. I just think that most of our users will
prefer another approach - and that the other approach is cleaner and
more SEEK-like.
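The claim that semantic type checking subsumes storage type checking can be illustrated with a toy subsumption test. This is a minimal sketch with an invented is-a hierarchy and invented function names, not SMS code.

```python
# Toy is-a hierarchy: child concept -> parent concept.
ISA = {
    "ShannonDiversity": "EcologicalDiversity",
    "SimpsonDiversity": "EcologicalDiversity",
    "EcologicalDiversity": "Measurement",
}


def subsumes(general, specific):
    """True if `specific` is-a `general` (reflexive and transitive)."""
    while specific is not None:
        if specific == general:
            return True
        specific = ISA.get(specific)
    return False


def check_link(output_type, input_type):
    """A connection is valid if the producer's semantic type is subsumed
    by the type the consumer expects."""
    return subsumes(input_type, output_type)


print(check_link("ShannonDiversity", "EcologicalDiversity"))  # True
print(check_link("Measurement", "ShannonDiversity"))          # False
```

Replacing plain arrows with such semantically typed links turns connection checking into exactly this kind of instance/subsumption validation.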
Coming out of the closet, what I'm thinking of as a possible, more
intuitive visual/processing environment for SEEK is a two-level view in
Kepler, where the top view is essentially the IMA, and the second level
(a tabbed UI can be used to switch, like in STELLA) is Kepler - users
may work at whatever level they want, but I think the roles of SMS and
AMS are cleanly separable, and that's what I think is still a bit muddy
in this discussion. I tend to see the separation as a design advantage.
BTW we also discussed having a third "view" - a control view with
sliders for parameters etc. - so it may fit right in, graphical
design-wise.

Here is an IMA-inspired idea of how that semantic view might look: the
top-level semantic view is essentially a GrOWL-like environment
(Kepler's UI works fine, too) to instantiate a concept that incarnates
the RESULT of your workflow, using the proper subclass: e.g.
"ecological diversity" is abstract and cannot be instantiated, but
"Shannon diversity" can - and has associated code or declarations that
will create the proper actors when the instance is created. So the user
decides to create an instance of Shannon diversity (or Simpson
diversity), and the semantic editor checks that all relationships are
satisfied and conformant (again, a lot of IMA details for discussion of
the "conformance" concept if interested - it's a bit more than is-a:
for the semantic type it has to do with space/time/classification
compatibility and representation), giving access to the EcoGrid to
retrieve instances of related objects to link into an instance.

The semantic equivalent of the "run" button is the "create" button,
which creates the instance and fires up the SMS component that consults
the KB about what actors calculate the states of each instance and
connects them together, creating the proper topological sort and
iteration sequence according to what time, space etc. are adopted by
the linked objects, and inserting transformations as needed.
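The "create" step described above can be sketched in miniature: a KB says which actor computes each concept's state and what it depends on, and creating an instance topologically sorts the actor network and runs it. All names here are invented for illustration; `graphlib` is in the Python standard library (3.9+), and the Shannon index formula H' = -sum(p_i * ln p_i) is standard.

```python
import math
from graphlib import TopologicalSorter


def shannon(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i)."""
    n = sum(abundances)
    return -sum((a / n) * math.log(a / n) for a in abundances if a)


# Toy knowledge base: concept -> (dependencies, actor computing its state).
KB = {
    "SpeciesCounts":    ((), lambda: [10, 20, 70]),
    "ShannonDiversity": (("SpeciesCounts",), shannon),
}


def create(concept):
    """'Create' an instance: topologically sort the actor network implied
    by the KB, run each actor, and return the requested concept's state."""
    deps = {c: set(KB[c][0]) for c in KB}
    states = {}
    for c in TopologicalSorter(deps).static_order():
        needed, actor = KB[c]
        states[c] = actor(*(states[d] for d in needed))
    return states[concept]


h = create("ShannonDiversity")
print(round(h, 3))  # 0.802
```

Switching the instantiated subclass to, say, "Simpson diversity" would amount to swapping one KB entry while reusing the same SpeciesCounts dependency, which is the "same data, different parent class" benefit mentioned below.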
The default visualization mode of the final concept's state is defined
by its semantics and its adopted observation context (do you REALLY
want your innocent ecologist to have to create a visualization actor
and connect it up?). Additional benefit: given that Simpson's index and
Shannon's use the same data (== the concept has the same semantics),
the user may later calculate it by simply switching the parent class of
the main concept (an operation that can obviously be presented under a
different name than "switching the class"). BTW there are other
additional benefits but I've written enough - mainly the "concept-based
data mining" I clumsily outlined in the Word rant I attached earlier.

Back to workflows: it does not take much to envision a conceptual
approach like this for any explicit workflow - just give names to the
arrows, make them cardinality one, and define the actors in terms of
what their state is, not what they do (usually it's the same thing). We
may work out some workflows as an exercise. In my experience, few
workflows need that, because they always serve to calculate something
and that something is enough to characterize it all. Even with the
fully worked out workflow, a difference is that you don't have to loop
over a data series or the polygons of a map, or worry about arrays. If
your innocent ecologist wants to (and believe me, s/he doesn't) she can
do it in the workflow view...

The nice thing here is that, messy as it may seem, everything is
actually very clean if done this way: your model, pipeline or whatever,
can be expressed and stored in RDF, point to OWL ontologies, and be
translated at will into whatever workflow language you need as long as
the actor collection is characterized properly. What I'm implementing
in the IMA is a "workflow policy" - a template class that can be
substituted in a runtime system to create e.g. an "interpreter" that
calculates stuff right away (e.g. when modeling collaboratively through
the web), or compiles the workflow into very efficient, template-based
C++ and gives you an executable for those large spatial models my folks
like to make. A good intermediate representation goes a long way!

Cheers
ferd

> FV>
> FV> Maybe we're mixing the sides up somewhat, and if so, is this ok... or is
> FV> it going to postpone the beautiful "moment of clarity" when we all
> FV> realize that we've all been thinking the same thing all along?
>
> probably that moment of clarity and realization will come along
> sometime soon ... if not, we need further research .. that's also good
> =B-)
>
> cheers
>
> Bertram
>
> FV> Cheers,
> FV> ferdinando
>
> >> - the event consumption/production type (useful for scheduling a la
> >> SDF)
> >> - the communication type (through the Ptolemy/Kepler client, directly
> >> via say FTP or HTTP) etc
> >>
> >> At some levels of modeling one does explicitly hide such detail from
> >> the modeler/user, but at other levels this might be a good way of
> >> overcoming some scalability issues (if you have terabyte data streams
> >> you want them to go directly where they need to).
> >>
> >> A related problem of web services (as actors) is that they send results
> >> back to the caller (Kepler) and don't forward them to the subsequent
> >> actor, making large data transfers virtually impossible.
> >>
> >> A simple extension to the web service model (anyone know whether
> >> that's already done???) would allow for data to include *references*,
> >> so that a process would be able to return to Kepler just a reference to
> >> the result data, and that reference would be passed on to the consuming
> >> actor, who then understands how to dereference it. This simple
> >> extension seems to be an easy solution to what we called before the
> >> 3rd party transfer problem:
> >>
> >> --> [Actor A] ---> [ Actor B] --> ...
> >> To stream large data set D from A to B w/o going through
> >> Ptolemy/Kepler one can simply send instead a handle &D and then B,
> >> upon receiving &D, understands and dereferences it by calling the
> >> appropriate protocol (FTP/gridFTP, HTTP, SRB, ...).
> >>
> >> Note that there are already explicit Kepler actors (SRBread/SRBwrite,
> >> gridFTP) for large data transfer. More elegant would it be to just
> >> send handles in the form, e.g., dereference(http://.....).
> >> Note that the special tag 'dereference' is needed since not every URL
> >> should be dereferenced (a URL can be perfectly valid data all by
> >> itself).
> >>
> >> It would be good if we could (a) define our extensions in line with
> >> web services extensions that deal with dereferencing message parts (if
> >> such exist) and (b) work on a joint
> >> Kepler/Ptolemy/Roadnet/SEEK/SDM etc. approach (in fact, Kepler is such
> >> a joint forum for co-designing this together).
> >>
> >> Bertram
> >>
> >> PS Tobin: I recently met Kent and heard good news about ORB access in
> >> Kepler already. You can also check with Efrat at SDSC on 3rd party
> >> transfer issues while you're at SDSC.
> >>
> >> >>>>> "EAL" == Edward A Lee writes:
> EAL>
> EAL> At 05:48 PM 6/11/2004 -0700, Tobin Fricke wrote:
> >> >> A basic question I have is, is there a defined network transport for
> >> >> Ptolemy relations? I expect that this question isn't really well-formed
> >> >> as I still have some reading to do on how relations actually work.
> >> >> Nonetheless, there is the question of, if we have different instances of
> >> >> Ptolemy talking to each other across the network, how are the data streams
> >> >> transmitted? In our case one option is to use the ORB as the stream
> >> >> transport, equipping each sub-model with ORB source and ORB sink
> >> >> components; and perhaps this could be done implicitly to automatically
> >> >> distribute a model across the network.
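The handle-passing extension Bertram sketches above (pass a tagged reference instead of the data, and let the consumer dereference it with the right protocol) could look roughly like this. Everything here is invented for illustration; it is not a real Kepler or web-services API, and an in-memory dict stands in for FTP/gridFTP/HTTP/SRB transports.

```python
DATASTORE = {"bigdata": list(range(5))}  # stands in for a huge remote dataset

PROTOCOLS = {
    # scheme -> fetcher; stand-ins for gridFTP, HTTP, SRB clients, etc.
    "mem": lambda path: DATASTORE[path],
}


class Deref:
    """Explicitly tagged handle: only Deref-wrapped values get dereferenced,
    since a plain URL string can be perfectly valid data all by itself."""
    def __init__(self, url):
        self.scheme, self.path = url.split("://", 1)


def receive(token):
    """Consumer-side logic: plain data passes through untouched; a Deref
    handle is resolved by the protocol named in its scheme."""
    if isinstance(token, Deref):
        return PROTOCOLS[token.scheme](token.path)
    return token


print(receive("http://example.org"))    # plain data, NOT dereferenced
print(receive(Deref("mem://bigdata")))  # resolved to the actual dataset
```

The engine only ever routes the small handle object; the bulk transfer happens directly between producer and consumer when `receive` resolves it, which is the point of the 3rd-party-transfer workaround.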
> >> >> But this line of thinking is
> >> >> strongly tied to the idea of data streams and may not be appropriate for
> >> >> the more general notion of relations in Ptolemy.
> EAL>
> EAL> We have done quite a bit of experimentation with distributed
> EAL> Ptolemy II models, but haven't completely settled on any one
> EAL> approach... Most of the recent work in this area has been
> EAL> done by Yang Zhao, whom I've cc'd for additional comments...
> EAL> Here are some notes:
> EAL>
> EAL> - A model can contain a component that is defined elsewhere
> EAL>   on the network, referenced at a URL. There is a demo
> EAL>   in the quick tour that runs a submodel that sits on our
> EAL>   web server.
> EAL>
> EAL> - The Corba library provides a mechanism for transporting
> EAL>   tokens from one model to another using either push or
> EAL>   pull style interactions. The software is in the
> EAL>   ptolemy.actor.corba package, but there are currently
> EAL>   no good (easily run) demos, and documentation is sparse.
> EAL>
> EAL> - The MobileModel actor accepts a model definition on an
> EAL>   input port and then executes that model. Yang has used
> EAL>   this with the Corba actors to build models where one
> EAL>   model constructs another model and sends it to another
> EAL>   machine on the network to execute.
> EAL>
> EAL> - The JXTA library (ptolemy.actor.lib.jxta) uses Sun's
> EAL>   XML-based P2P mechanism. Yang has used this to construct
> EAL>   a distributed chat room application.
> EAL>
> EAL> - The ptolemy.actor.lib.net package has two actors, DatagramReader
> EAL>   and DatagramWriter, that provide low-level mechanisms for
> EAL>   models to communicate over the net. Three or four years
> EAL>   ago Win Williams used this to create a distributed model
> EAL>   where two computers on the net were connected to
> EAL>   motor controllers and users could "arm wrestle" over
> EAL>   the network ... when one of the users turned his motor,
> EAL>   the other motor would turn, and they could fight each
> EAL>   other, trying to turn the motors in opposite directions.
> EAL>
> EAL> - Some years ago we also did some experimentation with
> EAL>   Sun's JINI P2P mechanism, but this has been largely
> EAL>   supplanted by JXTA.
> EAL>
> EAL> - The security library (ptolemy.actor.lib.security)
> EAL>   provides encryption and decryption and authentication
> EAL>   based on digital signatures.
> EAL>
> EAL> Most of these mechanisms have not been well packaged,
> EAL> and we haven't worked out the "lifecycle management" issues
> EAL> (how to start up a distributed model systematically, how
> EAL> to manage network failures).
> EAL>
> EAL> In my view, working out these issues is a top priority...
> EAL> I would be delighted to work with you or anyone else on this...
> EAL>
> EAL> Edward
> EAL>
> EAL> ------------
> EAL> Edward A. Lee, Professor
> EAL> 518 Cory Hall, UC Berkeley, Berkeley, CA 94720
> EAL> phone: 510-642-0455, fax: 510-642-2739
> EAL> eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal
> EAL>
> EAL> _______________________________________________
> EAL> kepler-dev mailing list
> EAL> kepler-dev at ecoinformatics.org
> EAL> http://www.ecoinformatics.org/mailman/listinfo/kepler-dev
> >>
> >> _______________________________________________
> >> kepler-dev mailing list
> >> kepler-dev at ecoinformatics.org
> >> http://www.ecoinformatics.org/mailman/listinfo/kepler-dev
> FV> --

From rods at ku.edu Tue Jun 15 08:06:59 2004
From: rods at ku.edu (Rod Spears)
Date: Tue, 15 Jun 2004 10:06:59 -0500
Subject: [seek-dev] Re: [seek-kr-sms] UI
In-Reply-To: <16590.53810.464561.681996@multivac.sdsc.edu>
References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu>
	<40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu>
	<40CDA082.5050602@ku.edu> <1087228283.4452.32.camel@basil.snr.uvm.edu>
	<40CDE853.7080909@ku.edu> <16590.53810.464561.681996@multivac.sdsc.edu>
Message-ID: <40CF1093.9020708@ku.edu>

I am not against boxes with arrows. But the problem is when you
completely integrate the boxes and arrows with a tool that only
"thinks" about boxes and arrows one particular way.

Rod

Bertram Ludaescher wrote:

>>>>>> "RS" == Rod Spears writes:
>
>RS> I agree with Ferdinando and the entire problem can be boiled down to his
>RS> quote "/we need to start a synthesis (top-down) effort aimed at
>RS> understanding what's the language that shapes an ecologist's thinking
>RS> when they approach a problem/"
>RS>
>RS> Of course, I think the word "language" is both literal and figurative.
>RS>
>RS> I disagree with the notion that Kepler is a "/visual modeling and
>RS> analysis language/." Or if it is that, it is at too low a level and at
>RS> the moment entirely too difficult for the non-SEEK scientists to use.
>
>Boxes with arrows between them are about as abstract as you can get, I
>think (as well as concrete, depending on what the boxes stand for).
>
>Maybe we need to think about how IMA or other more conceptual
>approaches can be viewed as boxes with arrows between them. I might be
>stretching this a bit, but I think that in principle it should be
>possible to draw an ER or UML class diagram as an actor network. With
>the right "director" such a network might allow one to query the
>conceptual ER/UML schema or some underlying databases conforming to
>that schema. It's certainly different from what the Ptolemy folks had
>originally in mind, but given their very abstract approach and the
>hooks into the system, I wouldn't be surprised if such a conceptual
>data modeling and querying tool were in fact quite easy to implement
>on top of Ptolemy.
>
>However even more interesting is, again, to think what structured
>approach to analysis pipelines or integrated modeling one would
>need.
>
>Bertram
>
>RS> The solution isn't "just fix Kepler's UI."
>RS> Kepler has an important role to
>RS> play in the project; it is a very powerful tool, as we all know. The
>RS> point is, let's not /force/ it to play a role it isn't necessarily meant
>RS> to play.
>RS>
>RS> Rod
>RS>
>RS> Ferdinando Villa wrote:
>RS>
>>>One way I would frame this discussion, thinking about the comment about
>>>"visual modeling and analysis language" and the whole UI issue, is that
>>>we need to start a synthesis (top-down) effort aimed at understanding
>>>what's the language that shapes an ecologist's thinking when they
>>>approach a problem, and characterize its relationship with the two
>>>conceptual frameworks we've been concentrating on so far: the KR
>>>framework and the workflow framework (in their abstract nature, before
>>>going down to OWL and Ptolemy, and WITHOUT one thought to any pretty
>>>screenshot!). The exercise should highlight whether we need to (a) have
>>>enough of one - maybe slightly extended - and infer the other, (b) find
>>>something that sits in the middle, or (c) find something totally
>>>different. This done, we should be able to easily define the visual
>>>language that most closely embodies it.
>>>
>>>Back to personal opinions, I'll just add that it's my belief that this
>>>process, although it needs very open minds, doesn't necessarily have to
>>>be very long and very hard, and I think we have all the pieces in place
>>>to quickly prototype the right UI (as opposed to the "advanced" one!)
>>>when the idea is clear, without having to distance ourselves much from
>>>things as they stand now...
>>>
>>>ferd
>>>
>>>On Mon, 2004-06-14 at 08:56, Rod Spears wrote:
>>>
>>>>In many ways I think the current user-interface work for Kepler is
>>>>almost orthogonal to this discussion.
>>>>
>>>>There are many issues with the current UI that need to be fixed ASAP,
>>>>but I don't think it should keep us from getting a group together to
>>>>start down the path that Shawn has outlined.
>>>>
>>>>If we (and we should) take a more process-oriented approach to
>>>>developing the UI, this work really has little, if anything, to do with
>>>>Kepler for quite some time.
>>>>
>>>>As I see it, the Kepler UI is really the "advanced" UI for SEEK. There is
>>>>a whole lot of work that needs to go on before that.
>>>>
>>>>Deana has a very valid point as to how to begin this work with/without
>>>>the usability position being filled. At the same time, many different
>>>>aspects of the UI are beginning to take shape and time is of the essence.
>>>>
>>>>Rod
>>>>
>>>>Deana Pennington wrote:
>>>>
>>>>>Shawn & Rod,
>>>>>
>>>>>I think these are all great suggestions, and we've been discussing
>>>>>putting together a group of ecologists for a couple of days of
>>>>>testing, but:
>>>>>
>>>>>1) we thought that there are some major issues with the interface as
>>>>>it stands right now that need to be fixed before we try to get a group
>>>>>together, and
>>>>>2) a decision needs to be made about the usability engineer position, so
>>>>>that person can be involved right from the start in user testing and
>>>>>UI design
>>>>>
>>>>>So, I think we should table this discussion until the above 2 things
>>>>>are resolved. It's obvious that this needs to be addressed soon.
>>>>>
>>>>>Deana
>>>>>
>>>>>Shawn Bowers wrote:
>>>>>
>>>>>>Rod Spears wrote:
>>>>>>
>>>>>>>(This is a general reply to the entire thread that is on seek-kr-sms):
>>>>>>>
>>>>>>>In the end, there are really two very simple questions about what we
>>>>>>>are all doing on SEEK:
>>>>>>>
>>>>>>>1) Can we make it work?
>>>>>>
>>>>>> a) This begs the question of "how" to make it work.
>>>>>>
>>>>>>>2) Will anybody use it?
>>>>>>
>>>>>> a) This begs the question of "can" anybody use it?
>>>>>>
>>>>>>>Shawn is right when he says we are coming at this from the
>>>>>>>"bottom-up."
SEEK has been very focused on the mechanics of how to >>>>>>>take legacy data and modeling techniques and create a new >>>>>>>environment to "house" them and better utilize them. In the end, if >>>>>>>you can't answer question #1, it does matter whether you can answer >>>>>>>question #2. >>>>>>> >>>>>>>But at the same time I have felt that we have been a little too >>>>>>>focused on #1, or at the very least we haven't been spending enough >>>>>>>time on question #2. >>>>>>> >>>>>>>Both Nico and Fernando touched on two very important aspects of what >>>>>>>we are talking about. Nico's comment about attacking the problem >>>>>>>from "both" ends (top down and bottom up) seems very appropriate. >>>>>>>In fact, the more we know about the back-end the better we know what >>>>>>>"tools" or functionality we have to develop for the front-end and >>>>>>>how best they can interact. >>>>>>> >>>>>>>Fernando's comment touches on the core of what concerns me the most, >>>>>>>and it is the realization of question #2 >>>>>>>His comment: "/I also think that the major impediment to an >>>>>>>understanding that requires a paradigm switch is the early >>>>>>>idealization of a graphical user interface/." Or more appropriately >>>>>>>known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ). >>>>>>> >>>>>>>We absolutely have to create a tool that scientists can use. So this >>>>>>>means we have to create a tool that "engages" the way they think >>>>>>>about modeling problems. Note that I used the word "engage", meaning >>>>>>>the tool doesn't to be an exact reflection of their process for >>>>>>>creating a models and doing analysis, but if has to be close enough >>>>>>>to make them want to "step up to the plate" and "take a swing for >>>>>>>the fence" as it were. >>>>>>> >>>>>>>In many ways too, Fernando's comment touch on the the problem I have >>>>>>>always had with Kepler. The UI is completely intertwined with the >>>>>>>model definition and the analysis specification. 
It has nearly zero >>>>>>>flexibility in how one "views" the "process" of entering in the >>>>>>>model. (As a side note, the UI is one of the harder aspects of >>>>>>>Kepler to tailor) >>>>>>> >>>>>>>In a perfect world of time and budgets it would be nice to create a >>>>>>>tool that has standalone Modeling and Analysis Definition Language, >>>>>>>then a core standalone analysis/simulation engine, and lastly a set >>>>>>>of GUI tools that assist the scientists in creating the models and >>>>>>>monitoring the execution. Notice how the GUI came last? The GUI >>>>>>>needs to be born out of the underlying technology instead of >>>>>>>defining it. >>>>>>> >>>>>>>I am a realist and I understand how much functionality Kepler brings >>>>>>>to the table, it gives us such a head start in AMS. Maybe we need to >>>>>>>start thinking about a more "conceptual" tool that fits in front of >>>>>>>Kelper, but before that we need to really understand how the average >>>>>>>scientist would approach the SEEK technology. I'll say this as a >>>>>>>joke: "but that pretty much excludes any scientist working on SEEK," >>>>>>>but it is true. Never let the folks creating the technology tell you >>>>>>>how the technology should be used, that's the responsibility of the >>>>>>>user. >>>>>>> >>>>>>>I know the word "use case" has been thrown around daily as if it >>>>>>>were confetti, but I think the time is approaching where we need to >>>>>>>really focus on developing some "real" end-user use cases. I think a >>>>>>>much bigger effort and emphasis needs to be placed on the >>>>>>>"top-down." And some of the ideas presented in this entire thread is >>>>>>>a good start. >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>>Great synthesis and points Rod. 
>>>>>> >>>>>>(Note that I un-cc'd kepler-dev, since this discussion is very much >>>>>>seek-specific) >>>>>> >>>>>>I agree with you, Nico, and Ferdinando that we need top-down >>>>>>development (i.e., an understanding of the targeted user problems and >>>>>>needs, and how best to address these via end-user interfaces) as well >>>>>>as bottom-up development (underlying technology, etc.). >>>>>> >>>>>>I think that in general, we are at a point in the project where we >>>>>>have a good idea of the kinds of solutions we can provide (e.g., with >>>>>>EcoGrid, Kepler, SMS, Taxon, and so on). >>>>>> >>>>>>And, we are beginning to get to the point where we are >>>>>>building/needing user interfaces: we are beginning to >>>>>>design/implement add-ons to Kepler, e.g., for EcoGrid querying and >>>>>>Ontology-enabled actor/dataset browsing; GrOWL is becoming our >>>>>>user-interface for ontologies; we are designing a user interface for >>>>>>annotating actors and datasets (for datasets, there are also UIs such >>>>>>as Morhpo); and working on taxonomic browsing. >>>>>> >>>>>>I definately think that now in the project is a great time to take a >>>>>>step back, and as these interfaces are being designed and implemented >>>>>>(as well as the lower-level technology), to be informed by real >>>>>>user-needs. >>>>>> >>>>>> >>>>>>Here is what I think needs to be done to do an effective top-down >>>>>>design: >>>>>> >>>>>>1. Clearly identify our target user group(s) and the general benefit >>>>>>we believe SEEK will provide to these groups. In particular, who are >>>>>>we developing the "SEEK system" for, and what are their >>>>>>problems/needs and constraints. Capture this as a report. (As an >>>>>>aside, it will be very hard to evaluate the utility of SEEK without >>>>>>understanding who it is meant to help, and how it is meant to help >>>>>>them.) >>>>>> >>>>>>2. Assemble a representive group of target users. 
As Rod suggests,
>>>>>>there should be participants that are independent of SEEK. [I attended one meeting that was close to this in Abq in Aug. 2003 -- have there been others?]
>>>>>>
>>>>>>3. Identify the needs of the representative group in terms of SEEK. These might be best represented as "user stories" (i.e., scenarios) initially, as opposed to use cases. I think there are two types of user stories that are extremely beneficial: (1) a scenario of how some process works now, e.g., the story of a scientist that needed to run a niche model; (2) ask the user to tell us "how you would like the system to work" for the stories from 1.
>>>>>>
>>>>>>4. Synthesize the user stories into a set of target use cases that touch a wide range of functionality. Develop and refine the use cases.
>>>>>>
>>>>>>5. From the use cases and user constraints, design one or more "storyboard" user interfaces, or the needed user interface components from the use cases. At this point, there may be different possible interfaces, e.g., a high-level ontology-based interface as suggested by Ferdinando and a low-level Kepler-based interface. This is where we need to be creative to address user needs.
>>>>>>
>>>>>>6. Get feedback from the target users on the "storyboard" interfaces (i.e., let them evaluate the interfaces). Revisit the user stories via the storyboards. Refine the second part of 3, and iterate 5 and 6.
>>>>>>
>>>>>>7. Develop one or more "prototypes" (i.e., the interface with canned functionality). Let the user group play with it, get feedback, and iterate.
>>>>>>
>>>>>>8. The result should be "the" user interface.
>>>>>>
>>>>>>One of the most important parts of this process is to identify the desired characteristics of the target users, and to pick a representative group of users that can lead to the widest array of use cases/user stories that are most beneficial to the target users.
>>>>>>
>>>>>>For example, we have primarily focused on niche modeling as the use case. (This isn't a great example, but bear with me.) If our sample user group only consisted of scientists that did niche modeling, or if this were our target user group, we would probably build a user interface around, and specific to, niche modeling (i.e., niche modeling would become an integral, and probably embedded, part of the interface). Of course, for us, this isn't necessarily true because we know we have a more general target user group. But hopefully you get the point.
>>>>>>
>>>>>>shawn
>>>>>>
>>>>>>>Rod
>>>>>>>
>>>>>>>Deana Pennington wrote:
>>>>>>>
>>>>>>>>In thinking about the Kepler UI, it has occurred to me that it would really be nice if the ontologies that we construct to organize the actors into categories could also be used in a high-level workflow design phase. For example, in the niche modeling workflow, GARP, neural networks, GRASP and many other algorithms could be used for that one step in the workflow. Those algorithms would all be organized under some high-level hierarchy ("StatisticalModels"). Another example is the Pre-sample step, where we are using the GARP pre-sample algorithm, but other sampling algorithms could be substituted. There should be a high-level "Sampling" concept, under which different sampling algorithms would be organized.
During the design phase, the user could construct a workflow based on these high-level concepts (Sampling and StatisticalModel), then bind an actor (already implemented or using Chad's new actor) in a particular view of that workflow. So, a workflow would be designed at a high conceptual level, and have multiple views, binding different algorithms, and those different views would be logically linked through the high-level workflow. The immediate case is that the GARP workflow we are designing will need another version for the neural network algorithm, and that version will be virtually an exact replicate except for that actor. Seems like it would be better to have one workflow with different views...
>>>>>>>>
>>>>>>>>I hope the above is coherent...in reading it, I'm not sure that it is :-)
>>>>>>>>
>>>>>>>>Deana
>>>>>>
>>>>>>_______________________________________________
>>>>>>seek-dev mailing list
>>>>>>seek-dev at ecoinformatics.org
>>>>>>http://www.ecoinformatics.org/mailman/listinfo/seek-dev
>>>>
>>>>_______________________________________________
>>>>seek-kr-sms mailing list
>>>>seek-kr-sms at ecoinformatics.org
>>>>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms
>RS>
>RS> I agree with Ferdinando and the entire problem can be boiled down to his quote "we need to start a synthesis (top-down) effort aimed at understanding what's the language that shapes an ecologist's thinking when they approach a problem"
>RS>
>RS> Of course, I think the word "language" is both literal and figurative.
>RS>
>RS> I disagree with the notion that Kepler is a "visual modeling and >RS> analysis language." Or if it is that, it is at too low a level and at >RS> the moment entirely too difficult for non-SEEK scientists to use. >RS> The solution isn't "just fix Kepler's UI." Kepler has an important >RS> role to play in the project; it is a very powerful tool, as we all know. >RS> The point is, let's not force it to play a role it isn't >RS> necessarily >RS> meant to play.
>RS>
>RS> Rod
>RS>
>RS>
>RS> Ferdinando Villa wrote: >RS>
>RS>
One way I would frame this discussion, thinking about the comment about
>RS> "visual modeling and analysis language" and the whole UI issue, is that
>RS> we need to start a synthesis (top-down) effort aimed at understanding
>RS> what's the language that shapes an ecologist's thinking when they
>RS> approach a problem, and characterize its relationship with the two
>RS> conceptual frameworks we've been concentrating on so far: the KR
>RS> framework and the workflow framework (in their abstract nature, before
>RS> going down to OWL and Ptolemy, and WITHOUT one thought to any pretty
>RS> screenshot!). The exercise should highlight whether we need to (a) have
>RS> enough of one - maybe slightly extended - and infer the other, (b) find
>RS> something that sits in the middle, or (c) find something totally
>RS> different. This done, we should be able to easily define the visual
>RS> language that most closely embodies it.
>RS> 
>RS> Back to personal opinions, I'll just add that it's my belief that this
>RS> process, although it needs very open minds, doesn't necessarily have to
>RS> be very long and very hard, and I think we have all the pieces in place
>RS> to quickly prototype the right UI (as opposed to the "advanced" one!)
>RS> when the idea is clear, without having to distance ourselves much from
>RS> things as they stand now...
>RS> 
>RS> ferd
>RS> 
>RS> On Mon, 2004-06-14 at 08:56, Rod Spears wrote:
>RS>   
>RS>
>RS>
In many ways I think the current user-interface work for Kepler is 
>RS> almost orthogonal to this discussion.
>RS> 
>RS> There are many issues with the current UI that need to be fixed ASAP, 
>RS> but I don't think it should keep us from getting a group together to 
>RS> start down the path that Shawn has outlined.
>RS> 
>RS> If we (and we should) take a more process oriented approach to 
>RS> developing the UI this work really has little, if anything, to do with 
>RS> Kepler for quite some time.
>RS> 
>RS> As I see it the Kepler UI is really the "advanced" UI for SEEK. There is 
>RS> a whole lot of work that needs to go on before that.
>RS> 
>RS> Deana has a very valid point as to how to begin this work with/without 
>RS> the usability position being filled. At the same time, many different 
>RS> aspects of the UI are beginning to take shape and time is of the essence.
>RS> 
>RS> Rod
>RS> 
>RS> 
>RS> Deana Pennington wrote:
>RS> 
>RS>     
>RS>
>RS>
Shawn & Rod,
>RS> 
>RS> I think these are all great suggestions, and we've been discussing 
>RS> putting together a group of ecologists for a couple of days of 
>RS> testing, but:
>RS> 
>RS> 1) we thought that there are some major issues with the interface as 
>RS> it stands right now that need to be fixed before we try to get a group 
>RS> together, and
>RS> 2) a decision needs to be made about the usability engineer position, so 
>RS> that person can be involved right from the start in user testing and 
>RS> UI design
>RS> 
>RS> So, I think we should table this discussion until the above 2 things 
>RS> are resolved.  It's obvious that this needs to be addressed soon.
>RS> 
>RS> Deana
>RS> 
>RS> 
>RS> Shawn Bowers wrote:
>RS> 
>RS>       
>RS>
>RS>
Rod Spears wrote:
>RS> 
>RS>         
>RS>
>RS>
(This is a general reply to the entire thread that is on seek-kr-sms):
>RS> 
>RS> In the end, there are really two very simple questions about what we 
>RS> are all doing on SEEK:
>RS> 
>RS> 1) Can we make it work?
>RS>     a) This begs the question of "how" to make it work.
>RS> 
>RS> 2) Will anybody use it?
>RS>     a) This begs the question of "can" anybody use it?
>RS> 
>RS> Shawn is right when he says we are coming at this from the 
>RS> "bottom-up." SEEK has been very focused on the mechanics of how to 
>RS> take legacy data and modeling techniques and create a new 
>RS> environment to "house" them and better utilize them. In the end, if 
>RS> you can't answer question #1, it doesn't matter whether you can answer 
>RS> question #2.
>RS> 
>RS> But at the same time I have felt that we have been a little too 
>RS> focused on #1, or at the very least we haven't been spending enough 
>RS> time on question #2.
>RS> 
>RS> Both Nico and Fernando touched on two very important aspects of what 
>RS> we are talking about. Nico's comment about attacking the problem 
>RS> from "both" ends (top down and bottom up)  seems very appropriate. 
>RS> In fact, the more we know about the back-end the better we know what 
>RS> "tools" or functionality we have to develop for the front-end and 
>RS> how best they can interact.
>RS> 
>RS> Fernando's comment touches on the core of what concerns me the most, 
>RS> and it is the realization of question #2.
>RS> His comment: "/I also think that the major impediment to an 
>RS> understanding that requires a paradigm switch is the early 
>RS> idealization of a graphical user interface/." Or more appropriately 
>RS> known as "the seduction of the GUI." (Soon to be a Broadway play ;-) ).
>RS> 
>RS> We absolutely have to create a tool that scientists can use. So this 
>RS> means we have to create a tool that "engages" the way they think 
>RS> about modeling problems. Note that I used the word "engage", meaning 
>RS> the tool doesn't have to be an exact reflection of their process for 
>RS> creating models and doing analysis, but it has to be close enough 
>RS> to make them want to "step up to the plate" and "take a swing for 
>RS> the fence" as it were.
>RS> 
>RS> In many ways too, Fernando's comment touches on the problem I have 
>RS> always had with Kepler. The UI is completely intertwined with the 
>RS> model definition and the analysis specification. It has nearly zero 
>RS> flexibility in how one "views" the "process" of entering in the 
>RS> model. (As a side note, the UI is one of the harder aspects of 
>RS> Kepler to tailor.)
>RS> 
>RS> In a perfect world of time and budgets it would be nice to create a 
>RS> tool that has a standalone Modeling and Analysis Definition Language, 
>RS> then a core standalone analysis/simulation engine, and lastly a set 
>RS> of GUI tools that assist the scientists in creating the models and 
>RS> monitoring the execution. Notice how the GUI came last? The GUI 
>RS> needs to be born out of the underlying technology instead of 
>RS> defining it.
>RS> 
>RS> I am a realist and I understand how much functionality Kepler brings 
>RS> to the table; it gives us such a head start in AMS. Maybe we need to 
>RS> start thinking about a more "conceptual" tool that fits in front of 
>RS> Kepler, but before that we need to really understand how the average 
>RS> scientist would approach the SEEK technology. I'll say this as a 
>RS> joke: "but that pretty much excludes any scientist working on SEEK," 
>RS> but it is true. Never let the folks creating the technology tell you 
>RS> how the technology should be used; that's the responsibility of the 
>RS> user.
>RS> 
>RS> I know the word "use case" has been thrown around daily as if it 
>RS> were confetti, but I think the time is approaching where we need to 
>RS> really focus on developing some "real" end-user use cases. I think a 
>RS> much bigger effort and emphasis needs to be placed on the 
>RS> "top-down." And some of the ideas presented in this entire thread are 
>RS> a good start.
>RS>           
>RS>
>RS>
>RS> Great synthesis and points Rod.
>RS> 
>RS> (Note that I un-cc'd kepler-dev, since this discussion is very much 
>RS> seek-specific)
>RS> 
>RS> I agree with you, Nico, and Ferdinando that we need top-down 
>RS> development (i.e., an understanding of the targeted user problems and 
>RS> needs, and how best to address these via end-user interfaces) as well 
>RS> as bottom-up development (underlying technology, etc.).
>RS> 
>RS> I think that in general, we are at a point in the project where we 
>RS> have a good idea of the kinds of solutions we can provide (e.g., with 
>RS> EcoGrid, Kepler, SMS, Taxon, and so on).
>RS> 
>RS> And, we are beginning to get to the point where we are 
>RS> building/needing user interfaces: we are beginning to 
>RS> design/implement add-ons to Kepler, e.g., for EcoGrid querying and 
>RS> Ontology-enabled actor/dataset browsing; GrOWL is becoming our 
>RS> user-interface for ontologies; we are designing a user interface for 
>RS> annotating actors and datasets (for datasets, there are also UIs such 
>RS> as Morpho); and working on taxonomic browsing.
>RS> 
>RS> I definitely think that now is a great time in the project to take a 
>RS> step back, and as these interfaces are being designed and implemented 
>RS> (as well as the lower-level technology), to be informed by real 
>RS> user-needs.
>RS> 
>RS> 
>RS> Here is what I think needs to be done to do an effective top-down 
>RS> design:
>RS> 
>RS> 1. Clearly identify our target user group(s) and the general benefit 
>RS> we believe SEEK will provide to these groups. In particular, who are 
>RS> we developing the "SEEK system" for, and what are their 
>RS> problems/needs and constraints.  Capture this as a report. (As an 
>RS> aside, it will be very hard to evaluate the utility of SEEK without 
>RS> understanding who it is meant to help, and how it is meant to help 
>RS> them.)
>RS> 
>RS> 2. Assemble a representative group of target users. As Rod suggests, 
>RS> there should be participants that are independent of SEEK. [I 
>RS> attended one meeting that was close to this in Abq in Aug. 2003 -- 
>RS> have there been others?]
>RS> 
>RS> 3. Identify the needs of the representative group in terms of SEEK. 
>RS> These might be best represented as "user stories" (i.e., scenarios) 
>RS> initially as opposed to use cases.  I think there are two types of 
>RS> user stories that are extremely beneficial: (1) a scenario of how 
>RS> some process works now, e.g., the story of a scientist that needed to 
>RS> run a niche model; (2) ask the user to tell us "how you would like 
>RS> the system to work" for the stories from 1.
>RS> 
>RS> 4. Synthesize the user stories into a set of target use cases that 
>RS> touch a wide range of functionality.  Develop and refine the use cases.
>RS> 
>RS> 5. From the use cases and user constraints, design one or more 
>RS> "storyboard" user interfaces, or the needed user interface components 
>RS> from the use cases.  At this point, there may be different possible 
>RS> interfaces, e.g., a high-level ontology based interface as suggested 
>RS> by Ferdinando and a low-level Kepler-based interface.  This is where 
>RS> we need to be creative to address user needs.
>RS> 
>RS> 6. Get feedback from the target users on the "storyboard" interfaces 
>RS> (i.e., let them evaluate the interfaces). Revisit the user stories 
>RS> via the storyboards. Refine the second part of 3, and iterate 5 and 6.
>RS> 
>RS> 7. Develop one or more "prototypes" (i.e., the interface with canned 
>RS> functionality). Let the user group play with it, get feedback, and 
>RS> iterate.
>RS> 
>RS> 8. The result should be "the" user interface.
>RS> 
>RS> 
>RS> One of the most important parts of this process is to identify the 
>RS> desired characteristics of the target users, and to pick a 
>RS> representative group of users that can lead to the widest array of 
>RS> use-cases/user-stories that are most beneficial to the target users.
>RS> 
>RS> For example, we have primarily focused on niche-modeling as the use 
>RS> case. (This isn't a great example, but bear with me.) If our sample 
>RS> user group only consisted of scientists that did niche modeling, or 
>RS> if this were our target user group, we would probably build a user 
>RS> interface around, and specific to niche modeling (i.e., niche 
>RS> modeling should become an integral, and probably embedded, part of 
>RS> the interface). Of course, for us, this isn't necessarily true 
>RS> because we know we have a more general target user group. But, 
>RS> hopefully you get the point.
>RS> 
>RS> 
>RS> shawn
>RS> 
>RS> 
>RS>         
>RS>
>RS>
Rod
>RS> 
>RS> 
>RS> Deana Pennington wrote:
>RS> 
>RS>           
>RS>
>RS>
In thinking about the Kepler UI, it has occurred to me that it 
>RS> would really be nice if the ontologies that we construct to 
>RS> organize the actors into categories could also be used in a 
>RS> high-level workflow design phase.  For example, in the niche 
>RS> modeling workflow, GARP, neural networks, GRASP and many other 
>RS> algorithms could be used for that one step in the workflow.  Those 
>RS> algorithms would all be organized under some high-level hierarchy 
>RS> ("StatisticalModels").  Another example is the Pre-sample step, 
>RS> where we are using the GARP pre-sample algorithm, but other 
>RS> sampling algorithms could be substituted.  There should be a 
>RS> high-level "Sampling" concept, under which different sampling 
>RS> algorithms would be organized.  During the design phase, the user 
>RS> could construct a workflow based on these high level concepts 
>RS> (Sampling and StatisticalModel), then bind an actor (already 
>RS> implemented or using Chad's new actor) in a particular view of that 
>RS> workflow.  So, a  workflow would be designed at a high conceptual 
>RS> level, and have multiple views, binding different algorithms, and 
>RS> those different views would be logically linked through the high 
>RS> level workflow.  The immediate case is that the GARP workflow we are 
>RS> designing will need another version for the neural network 
>RS> algorithm, and that version will be virtually an exact replicate 
>RS> except for that actor.  Seems like it would be better to have one 
>RS> workflow with different views...
>RS> 
>RS> I hope the above is coherent...in reading it, I'm not sure that it 
>RS> is  :-)
>RS> 
>RS> Deana
>RS> 
>RS> 
>RS>             
>RS>
>RS>
>RS>
_______________________________________________
>RS> seek-dev mailing list
>RS> seek-dev at ecoinformatics.org
>RS> http://www.ecoinformatics.org/mailman/listinfo/seek-dev
>RS>         
>RS>
>RS>
>RS>       
>RS>
>RS>
_______________________________________________
>RS> seek-kr-sms mailing list
>RS> seek-kr-sms at ecoinformatics.org
>RS> http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms
>RS>     
>RS>
>RS>
>RS>
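The thread's central technical idea, Deana's single conceptual workflow with multiple bound views, can be sketched concretely. The sketch below is illustrative only and is not Kepler or SEEK code: the concept and actor names (Sampling, StatisticalModel, GARP, NeuralNetwork, GRASP) are taken from the thread, while the `ConceptualWorkflow` class and its `bind_view`/`concrete_pipeline` methods are hypothetical names invented here.

```python
# Sketch of a workflow designed over ontology concepts, where each "view"
# binds the abstract concepts to concrete actors. All names are hypothetical;
# this is not Kepler/SEEK code.

# Ontology fragment: concept -> known concrete actors. As in the thread,
# GARP and a neural network both fall under "StatisticalModel".
ONTOLOGY = {
    "Sampling": ["GARPPresample"],
    "StatisticalModel": ["GARP", "NeuralNetwork", "GRASP"],
}

class ConceptualWorkflow:
    """One high-level workflow design; views bind concepts to actors."""

    def __init__(self, steps):
        self.steps = steps   # ordered list of concept names
        self.views = {}      # view name -> {concept: actor}

    def bind_view(self, name, bindings):
        # A view may only bind a concept to an actor the ontology
        # organizes under that concept.
        for concept, actor in bindings.items():
            if actor not in ONTOLOGY.get(concept, []):
                raise ValueError(f"{actor} is not a known {concept}")
        self.views[name] = bindings

    def concrete_pipeline(self, view):
        # The executable pipeline for one view: the same conceptual
        # design, with different concrete actors substituted.
        return [self.views[view][concept] for concept in self.steps]

# One conceptual design, two logically linked views:
wf = ConceptualWorkflow(["Sampling", "StatisticalModel"])
wf.bind_view("garp", {"Sampling": "GARPPresample", "StatisticalModel": "GARP"})
wf.bind_view("nn", {"Sampling": "GARPPresample", "StatisticalModel": "NeuralNetwork"})

print(wf.concrete_pipeline("garp"))  # ['GARPPresample', 'GARP']
print(wf.concrete_pipeline("nn"))    # ['GARPPresample', 'NeuralNetwork']
```

The point of the sketch is the design choice under discussion: the workflow is designed once over concepts, and each view only swaps the actor bound to a concept, so the GARP and neural-network versions stay logically linked instead of being near-duplicate workflows.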
>RS>
>RS>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040615/f9423b13/attachment.htm

From rods at ku.edu Tue Jun 15 08:14:25 2004
From: rods at ku.edu (Rod Spears)
Date: Tue, 15 Jun 2004 10:14:25 -0500
Subject: [seek-dev] Re: [seek-kr-sms] UI
In-Reply-To: <40CDF4E6.7070703@lternet.edu>
References: <40C629D8.1080107@lternet.edu> <40C9BBE1.7010304@ku.edu> <40CA16F4.9060204@sdsc.edu> <40CA1A33.7090600@lternet.edu> <40CDA082.5050602@ku.edu> <1087228283.4452.32.camel@basil.snr.uvm.edu> <40CDF4E6.7070703@lternet.edu>
Message-ID: <40CF1251.4040805@ku.edu>

I don't think we have enough information to do anything at the moment, which is basically what Shawn's message was about. We need to do some formal "use case" analysis to understand our users' "mental model" and what our users need before we can propose doing anything.

In fact, my belief that Kepler is not necessarily the right tool for the "top-down" approach is rather premature. I think the only thing we know is that Kepler will be a part of some portion of the solution. (Oh, and that boxes and arrows will be used somewhere ;-) .)

Rod

Deana Pennington wrote:
> One way to look at this is to think about short-term (next year) vs intermediate-term (next 3 years of SEEK) vs long-term (decade+) visions. I am completely sold on Ferdinando's approach for the long term. I don't think that it's something we can fully implement within the SEEK time frame. But I do think that we can design some short/intermediate-term objectives that will build on the things we are already doing in Kepler and be consistent with the long view. Can we (the "open-minded, interested SEEK participants" of Ferdinando's opinion below) come up with some very short-term objectives that include both approaches? Something implementable within the next few months?
If so, then I would be very interested in getting a user
> group together to test 1) Kepler alone, 2) IMA alone, and 3) a prototype combined approach. Although I understand the desire to fully flesh out the intended users' thinking and needs by soliciting feedback from an extensive group of non-SEEK users, I'm pretty sure that any survey/test that we tried to conduct right now, with nothing but the current Kepler app to work with, will be severely hampered by their inability to envision anything else. So, does anyone want to propose something that would be implementable in the short term, that merges top-down/bottom-up approaches as a prototype?
>
> I have already suggested that we take the GARP pipeline and implement it in IMA. In my original posting to this list, I suggested that we take the limited number of actors that we are using, construct a partial ontology that represents them and link it with whatever other ontologies we are constructing, and design an interface that would allow the user to work with those concepts rather than directly with the Kepler actors, which could be selected and bound to the concept in the final design step. It seems (to me) that this (or some other simplistic merging of approaches) should not be that hard to do. Ideas?
>
> Deana
>
> Ferdinando Villa wrote:
>
>> One way I would frame this discussion, thinking about the comment about "visual modeling and analysis language" and the whole UI issue, is that we need to start a synthesis (top-down) effort aimed at understanding what's the language that shapes an ecologist's thinking when they approach a problem, and characterize its relationship with the two conceptual frameworks we've been concentrating on so far: the KR framework and the workflow framework (in their abstract nature, before going down to OWL and Ptolemy, and WITHOUT one thought to any pretty screenshot!).
The exercise should highlight whether we need to (a) have
>> enough of one - maybe slightly extended - and infer the other, (b) find something that sits in the middle, or (c) find something totally different. This done, we should be able to easily define the visual language that most closely embodies it.
>>
>> Back to personal opinions, I'll just add that it's my belief that this process, although it needs very open minds, doesn't necessarily have to be very long and very hard, and I think we have all the pieces in place to quickly prototype the right UI (as opposed to the "advanced" one!) when the idea is clear, without having to distance ourselves much from things as they stand now...
>>
>> ferd
>>
>> On Mon, 2004-06-14 at 08:56, Rod Spears wrote:
>>
>>> In many ways I think the current user-interface work for Kepler is almost orthogonal to this discussion.
>>>
>>> There are many issues with the current UI that need to be fixed ASAP, but I don't think it should keep us from getting a group together to start down the path that Shawn has outlined.
>>>
>>> If we (and we should) take a more process-oriented approach to developing the UI, this work really has little, if anything, to do with Kepler for quite some time.
>>>
>>> As I see it, the Kepler UI is really the "advanced" UI for SEEK. There is a whole lot of work that needs to go on before that.
>>>
>>> Deana has a very valid point as to how to begin this work with/without the usability position being filled. At the same time, many different aspects of the UI are beginning to take shape and time is of the essence.
>>>
>>> Rod
>>>
>>> Deana Pennington wrote:
>>>
>>>> Shawn & Rod,
>>>>
>>>> I think these are all great suggestions, and we've been discussing putting together a group of ecologists for a couple of days of testing, but:
>>>>
>>>> 1) we thought that there are some major issues with the interface as it stands right now that need to be fixed before we try to get a group together, and
>>>> 2) a decision needs to be made about the usability engineer position, so that person can be involved right from the start in user testing and UI design
>>>>
>>>> So, I think we should table this discussion until the above 2 things are resolved. It's obvious that this needs to be addressed soon.
>>>>
>>>> Deana
>>>>
>>>> Shawn Bowers wrote:
>>>>
>>>>> Rod Spears wrote:
>>>>>
>>>>>> (This is a general reply to the entire thread that is on seek-kr-sms):
>>>>>>
>>>>>> In the end, there are really two very simple questions about what we are all doing on SEEK:
>>>>>>
>>>>>> 1) Can we make it work?
>>>>>>     a) This begs the question of "how" to make it work.
>>>>>>
>>>>>> 2) Will anybody use it?
>>>>>>     a) This begs the question of "can" anybody use it?
>>>>>>
>>>>>> Shawn is right when he says we are coming at this from the "bottom-up." SEEK has been very focused on the mechanics of how to take legacy data and modeling techniques and create a new environment to "house" them and better utilize them. In the end, if you can't answer question #1, it doesn't matter whether you can answer question #2.
>>>>>>
>>>>>> But at the same time I have felt that we have been a little too focused on #1, or at the very least we haven't been spending enough time on question #2.
>>>>>>
>>>>>> Both Nico and Fernando touched on two very important aspects of what we are talking about.
Nico's comment about attacking the
>>>>>> problem from "both" ends (top down and bottom up) seems very appropriate. In fact, the more we know about the back-end, the better we know what "tools" or functionality we have to develop for the front-end and how best they can interact.
>>>>>>
>>>>>> Fernando's comment touches on the core of what concerns me the most, and it is the realization of question #2. His comment: "/I also think that the major impediment to an understanding that requires a paradigm switch is the early idealization of a graphical user interface/." Or more appropriately known as "the seduction of the GUI." (Soon to be a Broadway play ;-) .)
>>>>>>
>>>>>> We absolutely have to create a tool that scientists can use. So this means we have to create a tool that "engages" the way they think about modeling problems. Note that I used the word "engage", meaning the tool doesn't have to be an exact reflection of their process for creating models and doing analysis, but it has to be close enough to make them want to "step up to the plate" and "take a swing for the fence", as it were.
>>>>>>
>>>>>> In many ways too, Fernando's comment touches on the problem I have always had with Kepler. The UI is completely intertwined with the model definition and the analysis specification. It has nearly zero flexibility in how one "views" the "process" of entering in the model. (As a side note, the UI is one of the harder aspects of Kepler to tailor.)
>>>>>>
>>>>>> In a perfect world of time and budgets it would be nice to create a tool that has a standalone Modeling and Analysis Definition Language, then a core standalone analysis/simulation engine, and lastly a set of GUI tools that assist the scientists in creating the models and monitoring the execution. Notice how the GUI came last?
The GUI needs to be born out of the underlying technology
>>>>>> instead of defining it.
>>>>>>
>>>>>> I am a realist and I understand how much functionality Kepler brings to the table; it gives us such a head start in AMS. Maybe we need to start thinking about a more "conceptual" tool that fits in front of Kepler, but before that we need to really understand how the average scientist would approach the SEEK technology. I'll say this as a joke: "but that pretty much excludes any scientist working on SEEK," but it is true. Never let the folks creating the technology tell you how the technology should be used; that's the responsibility of the user.
>>>>>>
>>>>>> I know the word "use case" has been thrown around daily as if it were confetti, but I think the time is approaching where we need to really focus on developing some "real" end-user use cases. I think a much bigger effort and emphasis needs to be placed on the "top-down." And some of the ideas presented in this entire thread are a good start.
>>>>>
>>>>> Great synthesis and points Rod.
>>>>>
>>>>> (Note that I un-cc'd kepler-dev, since this discussion is very much seek-specific.)
>>>>>
>>>>> I agree with you, Nico, and Ferdinando that we need top-down development (i.e., an understanding of the targeted user problems and needs, and how best to address these via end-user interfaces) as well as bottom-up development (underlying technology, etc.).
>>>>>
>>>>> I think that in general, we are at a point in the project where we have a good idea of the kinds of solutions we can provide (e.g., with EcoGrid, Kepler, SMS, Taxon, and so on).
>>>>>
>>>>> And, we are beginning to get to the point where we are building/needing user interfaces: we are beginning to design/implement add-ons to Kepler, e.g., for EcoGrid querying and ontology-enabled actor/dataset browsing; GrOWL is becoming our user interface for ontologies; we are designing a user interface for annotating actors and datasets (for datasets, there are also UIs such as Morpho); and working on taxonomic browsing.
>>>>>
>>>>> I definitely think that now is a great time in the project to take a step back, and as these interfaces are being designed and implemented (as well as the lower-level technology), to be informed by real user needs.
>>>>>
>>>>> Here is what I think needs to be done to do an effective top-down design:
>>>>>
>>>>> 1. Clearly identify our target user group(s) and the general benefit we believe SEEK will provide to these groups. In particular, who are we developing the "SEEK system" for, and what are their problems/needs and constraints. Capture this as a report. (As an aside, it will be very hard to evaluate the utility of SEEK without understanding who it is meant to help, and how it is meant to help them.)
>>>>>
>>>>> 2. Assemble a representative group of target users. As Rod suggests, there should be participants that are independent of SEEK. [I attended one meeting that was close to this in Abq in Aug. 2003 -- have there been others?]
>>>>>
>>>>> 3. Identify the needs of the representative group in terms of SEEK. These might be best represented as "user stories" (i.e., scenarios) initially, as opposed to use cases.
I think there are >>>>> two types of user stories that are extremely beneficial: (1) a >>>>> scenario of how some process works now, e.g., the story of a >>>>> scientist who needed to run a niche model; (2) ask the user to >>>>> tell us "how you would like the system to work" for the stories >>>>> from 1. >>>>> >>>>> 4. Synthesize the user stories into a set of target use cases that >>>>> touch a wide range of functionality. Develop and refine the use >>>>> cases. >>>>> >>>>> 5. From the use cases and user constraints, design one or more >>>>> "storyboard" user interfaces, or the needed user interface >>>>> components from the use cases. At this point, there may be >>>>> different possible interfaces, e.g., a high-level ontology-based >>>>> interface as suggested by Ferdinando and a low-level Kepler-based >>>>> interface. This is where we need to be creative to address user >>>>> needs. >>>>> >>>>> 6. Get feedback from the target users on the "storyboard" >>>>> interfaces (i.e., let them evaluate the interfaces). Revisit the >>>>> user stories via the storyboards. Refine the second part of 3, and >>>>> iterate 5 and 6. >>>>> >>>>> 7. Develop one or more "prototypes" (i.e., the interface with >>>>> canned functionality). Let the user group play with it, get >>>>> feedback, and iterate. >>>>> >>>>> 8. The result should be "the" user interface. >>>>> >>>>> >>>>> One of the most important parts of this process is to identify the >>>>> desired characteristics of the target users, and to pick a >>>>> representative group of users that can lead to the widest array of >>>>> use cases/user stories that are most beneficial to the target users. >>>>> >>>>> For example, we have primarily focused on niche modeling as the >>>>> use case.
(This isn't a great example, but bear with me) If our >>>>> sample user group only consisted of scientists that did niche >>>>> modeling, or if this were our target user group, we would probably >>>>> build a user interface around, and specific to niche modeling >>>>> (i.e., niche modeling should become an integral, and probably >>>>> embedded, part of the interface). Of course, for us, this isn't >>>>> necessarily true because we know we have a more general target >>>>> user group. But, hopefully you get the point. >>>>> >>>>> >>>>> shawn >>>>> >>>>> >>>>> >>>>> >>>>>> Rod >>>>>> >>>>>> >>>>>> Deana Pennington wrote: >>>>>> >>>>>> >>>>>> >>>>>>> In thinking about the Kepler UI, it has occurred to me that it >>>>>>> would really be nice if the ontologies that we construct to >>>>>>> organize the actors into categories, could also be used in a >>>>>>> high-level workflow design phase. For example, in the niche >>>>>>> modeling workflow, GARP, neural networks, GRASP and many other >>>>>>> algorithms could be used for that one step in the workflow. >>>>>>> Those algorithms would all be organized under some high-level >>>>>>> hierarchy ("StatisticalModels"). Another example is the >>>>>>> Pre-sample step, where we are using the GARP pre-sample >>>>>>> algorithm, but other sampling algorithms could be substituted. >>>>>>> There should be a high-level "Sampling" concept, under which >>>>>>> different sampling algorithms would be organized. During the >>>>>>> design phase, the user could construct a workflow based on these >>>>>>> high level concepts (Sampling and StatisticalModel), then bind >>>>>>> an actor (already implemented or using Chad's new actor) in a >>>>>>> particular view of that workflow. So, a workflow would be >>>>>>> designed at a high conceptual level, and have multiple views, >>>>>>> binding different algorithms, and those different views would be >>>>>>> logically linked through the high level workflow. 
The immediate >>>>>>> case is the GARP workflow we are designing will need another >>>>>>> version for the neural network algorithm, and that version will >>>>>>> be virtually an exact replicate except for that actor. Seems >>>>>>> like it would be better to have one workflow with different >>>>>>> views... >>>>>>> >>>>>>> I hope the above is coherent...in reading it, I'm not sure that >>>>>>> it is :-) >>>>>>> >>>>>>> Deana >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> _______________________________________________ >>>>> seek-dev mailing list >>>>> seek-dev at ecoinformatics.org >>>>> http://www.ecoinformatics.org/mailman/listinfo/seek-dev >>>>> >>>> >>>> >>>> >>> >>> _______________________________________________ >>> seek-kr-sms mailing list >>> seek-kr-sms at ecoinformatics.org >>> http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms >>> >> > From berkley at nceas.ucsb.edu Tue Jun 15 09:18:26 2004 From: berkley at nceas.ucsb.edu (Chad Berkley) Date: Tue, 15 Jun 2004 09:18:26 -0700 Subject: [seek-kr-sms] more list info Message-ID: <40CF2152.70204@nceas.ucsb.edu> Hi, Matt and I chatted yesterday about this moderation problem with the list. We think it's because seek-kr and seek-sms are forwarding to seek-kr-sms and causing the moderation. So could you please make sure you are sending your messages to seek-kr-sms instead of to seek-kr or seek-sms? Hopefully this will fix this problem. chad From ludaesch at sdsc.edu Thu Jun 17 03:48:31 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Thu, 17 Jun 2004 03:48:31 -0700 Subject: [seek-kr-sms] growl: owl-dl or owl full?! 
In-Reply-To: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B> References: <000001c44ef9$36a283e0$3ca6c684@BTS2K3000D5635086B> Message-ID: <16593.30463.371000.338596@gargle.gargle.HOWL> Sometimes less can be more (don't try to axiomatize this sentence ;-) Specifically, restricting an editor to allow only certain constructs (such as OWL-DL) can be advantageous, since you can then say what you are able to do with the ontologies (here: reasoning about concept inclusion, for example). A stronger language such as OWL-Full (or just FO or Datalog constraints) is also OK, as long as you don't try to use it for reasoning but instead just for constraint checking. Bertram Serguei Krivov writes: > Hi All, > I am working on growl editing and have an urgent design issue: > Should we impose an editing discipline which would allow owl dl > constructs only and do nothing when the user tries to make an owl full > construct? Some editors like oil-edit (which works with owl now) are > intolerant of owl full. Personally I think that this is right, since > owl-full ontologies are difficult to use. OWLAPI also seems not really > happy to see owl-full constructs; it reports an error, but somehow it > processes them anyway. > > Ideally one could have a trigger which switches the owl-dl discipline on and > off. But implementing such a trigger would increase the editing code maybe > 1.6 times compared to a plain owl-dl discipline. I would leave > this for the future, but you guys may have other suggestions (?) > Please let me know what you think. > serguei
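Bertram's point above (let the editor enforce OWL-DL, and treat anything stronger as check-only constraints) can be made concrete as a small validity filter. A minimal sketch in Python over plain subject/predicate/object triples, with no OWL library; the `ex:` names and the single heuristic it checks (a term used both as a class and as a property value) are illustrative assumptions:

```python
RDF_TYPE = "rdf:type"
OWL_CLASS = "owl:Class"

def owl_full_violations(triples):
    """Flag terms used both as a class and as a property value.

    OWL-DL keeps classes and individuals disjoint; pointing an object
    property at a class is a common OWL-Full construct that an editor
    running in "DL-only" mode might refuse to create.
    """
    classes = {s for (s, p, o) in triples if p == RDF_TYPE and o == OWL_CLASS}
    as_values = {o for (s, p, o) in triples if p != RDF_TYPE}
    return sorted(classes & as_values)

triples = [
    ("ex:Sampling", RDF_TYPE, OWL_CLASS),
    ("ex:GarpPresample", RDF_TYPE, OWL_CLASS),
    ("ex:step1", "ex:usesAlgorithmClass", "ex:Sampling"),  # OWL-Full: class as value
    ("ex:step1", "ex:hasInput", "ex:occurrenceData"),      # fine: plain individual
]
print(owl_full_violations(triples))  # → ['ex:Sampling']
```

A GrOWL-style "trigger" for switching the discipline on and off would then just be a flag deciding whether this check blocks the edit or merely warns.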
From ludaesch at sdsc.edu Thu Jun 17 04:07:40 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Thu, 17 Jun 2004 04:07:40 -0700 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: References: <40C897B3.3050001@sdsc.edu> Message-ID: <16593.31612.323000.384304@gargle.gargle.HOWL> Rich Williams writes: > (I want to clarify that OWL is not necessarily stored in XML - the XML-RDF > syntax is just the most commonly chosen syntax. You can store OWL (and RDF) > in much less-verbose, non-XML syntaxes.) yes, very much so (e.g. in one of the Sparrow variants =B-) > I agree that non-OWL-DL constructs should be avoided. The extreme > flexibility of RDF and OWL-Full will make generic OWL-Full tools extremely > difficult to develop. So far, the main thing that I have wanted to do that > is outside OWL-DL is to have a property that takes a class as its value, > rather than a class instance. This restriction in expressivity leads to > some rather inelegant hacks to work around it and remain in OWL-DL. Another > frequent issue is the lack of value restrictions on datatype properties, but > I don't think that this is available in OWL-Full either. (One solution is > to subtype the xml datatypes to restrict the range of permissible values, > but no tools yet support this.) It seems that most of us feel that OWL-DL should be the focus -- so we are converging. As mentioned before, constructs beyond DL might still have some role to play in the future, e.g. when *checking* the consistency of a database instance w.r.t. a set of more ontology constraints (those will in general be too powerful for *reasoning* about them). > While I use Protege, I would not claim that it has anything approaching an > optimal user interface, and I think that good visualization tools could play > a role for the so-called knowledge engineer (knowledge-model engineer?).
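The "inelegant hacks" Rich mentions for class-valued properties usually amount to redirecting the statement through a designated individual. A hedged sketch, not Rich's actual workaround (the message doesn't spell it out); the proxy naming convention is invented for illustration:

```python
RDF_TYPE = "rdf:type"

def reify_class_values(triples, class_terms):
    """Redirect class-valued object properties so the ontology stays in OWL-DL.

    For each class C that appears as a property value, mint a proxy
    individual (the "_designated" suffix is an illustrative convention),
    type it as C, and rewrite the offending statements to point at it.
    """
    rewritten, proxies = [], {}
    for s, p, o in triples:
        if p != RDF_TYPE and o in class_terms:
            proxy = proxies.setdefault(o, o + "_designated")
            rewritten.append((s, p, proxy))
        else:
            rewritten.append((s, p, o))
    # One typing statement per minted proxy individual.
    rewritten.extend((proxy, RDF_TYPE, cls) for cls, proxy in proxies.items())
    return rewritten

fixed = reify_class_values(
    [("ex:niche_step", "ex:usesAlgorithm", "ex:Sampling")],
    class_terms={"ex:Sampling"},
)
print(fixed)
# → [('ex:niche_step', 'ex:usesAlgorithm', 'ex:Sampling_designated'),
#    ('ex:Sampling_designated', 'rdf:type', 'ex:Sampling')]
```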
> None of the graphical tools that I have experimented with have been better > for me than the Protege tree and dialog-box based user interface, so that's > what I use. > > As far as the GrOWL UI goes, I see no reason why it can't be like the user > interface of many commercial software packages, where both entry-level and > expert users are accommodated. There could be an easily-accessible set of > commonly performed operations (creating subclasses, disjoint, object and > datatype properties, some basic property restrictions, etc.), and the full > expressivity of OWL-DL could also be available through "advanced" or "more" > buttons in dialogs. I'm wondering whether, beyond the UI issues, there are even more severe issues in creating, modeling with, and understanding ontologies. Good visualizations and UIs can help to comprehend what's going on, but a good grasp of the underlying formalism and the consequences of different modeling choices will be important for "power users/modelers". I see as a major research challenge "consequence management". Let us be reminded that an OWL-DL ontology is a bunch of axioms. The axioms themselves are a means to an end. The semantics of them is the set of models (or minimal models) of the axioms. We need to be getting better at visualizing and querying the logical consequences of axioms (not just the axioms as such). This is a tough one. Visualizing the class hierarchy is well understood and useful, but captures only the (implied) isa relation. Generating an extended UML or ER diagram capturing some more semantics (properties/roles, etc.) might be useful as well. Bertram > > Rich > > > > -----Original Message----- > > From: seek-kr-sms-admin at ecoinformatics.org > > [mailto:seek-kr-sms-admin at ecoinformatics.org]On Behalf Of Shawn Bowers > > Sent: Thursday, June 10, 2004 10:18 AM > > To: Serguei Krivov > > Cc: seek-kr-sms at ecoinformatics.org > > Subject: Re: [seek-kr-sms] growl: owl-dl or owl full?!
> > > > > > Hi Serguei, > > > > If you look at the Protege data model, they have a language that offers > > similar meta-modeling constructs as found in OWL-Full. > > > > In my opinion, the use of these constructs, unless you really know what > > you are doing, can be confusing and often leads to incomprehensible > > conceptual models. > > > > My general opinion is to not support similar constructs in GrOWL. > > > > But, it isn't clear to me at this point who the target user is of the > > GrOWL onto editing and management tools. If it is scientists and other > > domain experts, I think most of the OWL-DL and even OWL-Lite constructs > > will be too much. For these users, I think we need to be very clear > > about what modeling constructs we want to support (e.g., these > > constructs may be at a "higher" level than OWL-DL constructs), > > explicitly support the needed constructs through visual notations (not > > OWL formulas); then figure out how those constructs are realized by > > OWL-Lite or OWL-DL. Since GrOWL seems to be on track to output OWL > > ontologies, these can be further edited by a knowledge "engineer" if > > needed (to add more constraints). However, if the target user group is > > knowledge engineers, e.g., Rich and the KR group, doesn't Protege > > already offer the necessary interface? > > > > In general, the family of OWL standards is complex, with many modeling > > constructs, and verbose, not only because OWL is stored via XML, but > > also because it is based on RDF. I think there is a definite need for > > ontology tools that do more than just expose OWL or any other DL -- like > > XML, OWL is much better suited as a storage and exchange language, not > > as an interface in and of itself for users.
> > > > So, my overall suggestion, would be to figure out the necessary > > constructs for the target user group (which I'd be happy to help with), > > figure out how best to present these to the user (again, I'd be happy to > > help with this), then figure out if it is representable in OWL-Lite, > > OWL-DL (most likely), or OWL-Full (not likely). > > > > > > shawn > > > > Serguei Krivov wrote: > > > Hi All, > > > > > > I am working on growl editing and have an urgent design issue: > > > > > > Should we impose the editing discipline which would allow owl dl > > > constructs only and do nothing when user tries to make an owl full > > > construct? Some editors like oil-edit (that works with owl now) are > > > intolerant to owl full. Personally I think that this is right since > > > owl-full ontologies are difficult to use. OWLAPI seems also not really > > > happy to see owl-full constructs, it reports error, however somehow it > > > processes them. > > > > > > > > > > > > Ideally one can have a trigger which switch owl-dl discipline on and > > > off. But implementing such trigger would increase the editing code may > > > be 1.6 times comparing to making plain owl-dl discipline. I would leave > > > this for the future, but you guys may have other suggestions (?) > > > > > > Please let me know what you think. > > > > > > serguei > > > > > > > > > > > > > _______________________________________________ > > seek-kr-sms mailing list > > seek-kr-sms at ecoinformatics.org > > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms From Serguei.Krivov at uvm.edu Thu Jun 17 08:29:36 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Thu, 17 Jun 2004 11:29:36 -0400 Subject: [seek-kr-sms] growl: owl-dl or owl full?! 
In-Reply-To: <16593.30463.371000.338596@gargle.gargle.HOWL> Message-ID: <000c01c4547f$e58f9ab0$3ca6c684@BTS2K3000D5635086B> Sometimes less can be more (don't try to axiomatize this sentence ;-) (exist X)(exist Y)(less(X)->more(Y)) ;-) serguei From Serguei.Krivov at uvm.edu Thu Jun 17 08:43:39 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Thu, 17 Jun 2004 11:43:39 -0400 Subject: [seek-kr-sms] growl: owl-dl or owl full?! In-Reply-To: <16593.31612.323000.384304@gargle.gargle.HOWL> Message-ID: <000d01c45481$dc3a7730$3ca6c684@BTS2K3000D5635086B> I see as a major research challenge "consequence management". Let us be reminded that an OWL-DL ontology is a bunch of axioms. The axioms themselves are a means to an end. The semantics of them is the set of models (or minimal models) of the axioms. We need to be getting better at visualizing and querying the logical consequences of axioms (not just the axioms as such). This is a tough one. Hi Bertram, From ferdinando.villa at uvm.edu Sat Jun 19 13:57:28 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Sat, 19 Jun 2004 16:57:28 -0400 Subject: [seek-kr-sms] Re: [kepler-dev] Kepler UI- "Boxes and Arrows" In-Reply-To: <40D35B12.8030808@nceas.ucsb.edu> References: <40D35B12.8030808@nceas.ucsb.edu> Message-ID: <1087653839.5941.12.camel@basil.snr.uvm.edu> Hi Dan - one comment only, linking back to the previous discussion in kr-sms: In the web page at http://www.nceas.ucsb.edu/~higgins/BoxesAndArrows, you write this about the Stella version: >A little thought shows that a Stella model is really a graphical >representation of a set of coupled rate equations. The net flows >are just the rates at which stocks increase or decrease. Stella does >a nice job of hiding these mathematical details, while letting the >user determine what concepts determine the amount (stocks) of >various parameters. Note, however, that it would be difficult to use >Stella models for some other types of workflows - e.g.
a set of sequential image processing steps applied to some scanned image. Exactly - it would be difficult because Stella is a system dedicated to creating instances of a very limited ontology - that of stocks, flows etc. It's also why the Stella diagram is a lot simpler and more readable (with limitations) than the others - in fact it contains a whole new compartment, vegetation, that doesn't even fit in the kepler window. In fact, Stella is a half-baked conceptual modeling environment. The "image analysis" ontology could create a Stella-like diagram for what you mention. And we can do better than Stella! Also, the strength in Stella is not that the boxes are the "data" as you mention... the strength is that the boxes are the SPECIES and the ECOLOGICAL PROCESSES! That's how ecologists see it, and that's why they like it. Now - we know SEEK is an ontology-driven system... and we want to hide the semantic types in the connections. I think your page shows nicely that we should at least think harder about some kind of conceptual modeling. Cheers, ferdinando On Fri, 2004-06-18 at 17:13, Daniel Higgins wrote: > Hi All, > > The recent Kepler thread discussing all sorts of UI issues got me > thinking about a lot of the issues discussed. In particular, I liked > Bertram's characterization of such visual systems as "Boxes and > Arrows". As a result I put together a personal web page with a number > of example screen shots from various "Box and Arrow" systems (e.g. > Stella, Kepler) to try to show similarities and differences. I also > added some thoughts of my own. If you are interested, see > > http://www.nceas.ucsb.edu/~higgins/BoxesAndArrows/ > > Comments are always appreciated. 
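Dan's observation that a Stella model is really a set of coupled rate equations can be restated in a few lines: a stock-and-flow diagram is forward-Euler integration of net flows. A sketch; the logistic growth example is illustrative, not taken from the cited page:

```python
def simulate(stocks, flows, dt=0.1, steps=1000):
    """Forward-Euler integration of a stock-and-flow model.

    stocks: name -> initial amount
    flows:  name -> net-rate function over the current state;
            positive rates fill the stock, negative rates drain it.
    """
    state = dict(stocks)
    for _ in range(steps):
        rates = {name: rate(state) for name, rate in flows.items()}
        for name, r in rates.items():
            state[name] += dt * r
    return state

# One stock with a single net flow: logistic growth toward capacity 100.
result = simulate(
    stocks={"population": 10.0},
    flows={"population": lambda s: 0.5 * s["population"] * (1 - s["population"] / 100.0)},
)
print(result["population"])  # climbs toward the carrying capacity of 100
```

The Stella diagram hides exactly this loop; a sequential image-processing pipeline has no natural encoding in it, which is Dan's point.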
> > > Dan Higgins > NCEAS -- From ludaesch at sdsc.edu Sat Jun 19 01:06:57 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Sat, 19 Jun 2004 01:06:57 -0700 Subject: [kepler-dev] Re: [seek-kr-sms] UI In-Reply-To: <40C8A553.4090101@sdsc.edu> References: <40C629D8.1080107@lternet.edu> <5.1.0.14.2.20040610081445.00bc5f18@mho.eecs.berkeley.edu> <1086887039.40c8947f7e935@mail.lternet.edu> <1086887659.4449.46.camel@basil.snr.uvm.edu> <40C89F92.9080404@sdsc.edu> <1086890946.4449.57.camel@basil.snr.uvm.edu> <40C8A553.4090101@sdsc.edu> Message-ID: <16595.62497.651000.824170@gargle.gargle.HOWL> Part of the SEEK collaboration is IT research. There is no need to "over-authoritize" research -- Just do it ;-) It's all about exchange and refinement of ideas (sometimes competing), improving results, etc. Diversity in research is healthy ... Convergence on common visions and approaches is also very helpful if it can be achieved in a collaboration. And when it comes to SEEK operation and development, there is always the project manager to help prioritize =B-) cheers Bertram Shawn Bowers writes: > > I probably don't have the authority to speak about the goals of SEEK > either :-) > > shawn > > > Ferdinando Villa wrote: > > > I like the top-down vs. the bottom-up distinction... Having no > > authorities to speak about the goals of SEEK, I would just add that I > > think there are definite advantages in being able to recognize, e.g., > > the Shannon index as a concrete subclass of the abstract "Ecological > > diversity" that sits in the core SEEK KB, both from the points of view > > of internal software design and of communicating concepts and tools with > > our user base.
Being a concrete class, you can use the Shannon index > > semantic type not only to tag a dataset, but also to associate an > > implementation that can produce a workflow from a legal instance of it, > > defined in a suitable knowledge modelling environment, and whose > > complexity scales exactly with its conceptual definition - what the > > ecologist knows already, as opposed to the workflow's complexity. Also > > in terms of system maintenance, this characterization can use a more > > limited set of actors and would concentrate on extending the "tools" by > > extending the knowledge base rather than the processing environment. > > That's where I'm going with the IMA (with GrOWL as the modelling > > environment and with a lot of the "bottom-up" tools and concepts I > > learned from you guys) and I'm very much hoping that by > > cross-fertilizing approaches like this, we can find ourselves at the > > point of encounter before SEEK is over! > > > > Ciao > > ferdinando > > > > On Thu, 2004-06-10 at 13:51, Shawn Bowers wrote: > > > >> From my understanding of the goals of SEEK, I think what you describe > >>below Ferdinando is very much the ultimate goal. Although it has never > >>been formally stated, I think we are working "bottom up" to achieve the > >>goal; building the necessary infrastructure so operations as you > >>envision below can be realized. I believe that a driving factor for > >>developing the system bottom up is because we are charged with > >>"enlivening" legacy data and services.
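For concreteness, the Shannon index that Ferdinando treats as a concrete concept has a small computational core over proportional abundances, H' = -sum(p_i * ln p_i). A sketch with invented species counts:

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over proportional abundances."""
    total = sum(abundances)
    props = [n / total for n in abundances if n > 0]
    return -sum(p * math.log(p) for p in props)

# Four species: a perfectly even community vs. a heavily skewed one.
print(round(shannon_index([25, 25, 25, 25]), 3))  # → 1.386 (= ln 4, the maximum)
print(round(shannon_index([97, 1, 1, 1]), 3))     # much lower for the skewed case
```

The ontology-level work is then exactly what the thread describes: annotating this step as computing a biodiversity index and typing its input as proportional abundances, rather than modeling the arithmetic itself.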
> >> > >>To do what you suggest, we need ontologies, we need ontology interfaces > >>and tools (so users can easily pose such questions, and generally work > >>at the ontology level), we need datasets, we need datasets annotated > >>with (or accessible via) ontologies (for retrieving, accessing, and > >>combining the relevant data), we need workflows/analysis steps (that can > >>be executed), and annotations for these as well (so they too can be > >>retrieved and appropriately executed). [Note that in SEEK, we haven't yet > >>proposed to model the calculation of, e.g., a Shannon index; we have only > >>proposed annotating such processes with instances from an ontology > >>(e.g., that the process computes a biodiversity index), and semantically > >>describing the types of input and output the process consumes and > >>produces (i.e., that it takes proportional abundances, etc., the > >>description acting much like a database schema annotated with ontology > >>defs).] > >> > >>Just as a note, what you suggest is also very much related to planning > >>problems in AI (which obviously have some practical limitations). A > >>benefit of pursuing this bottom-up strategy in SEEK is that we may make > >>things easier for scientists and researchers, even if we cannot achieve > >>(e.g., either computationally or in a reasonable amount of time) the > >>scenario you describe below.
Then > >>>SMS/AMS - not the user - creates the "model". Can we envision what is > >>>the ultimate concept that the GARP process calculates? Distribution of a > >>>species? Modeling that concept (using a GARP-aware subclass of the base > >>>ecological concept) will guide the user through the retrieval of > >>>compatible data (incrementally narrowing the space/time context to > >>>search as new required data are added), then create a GARP pipeline and > >>>run it - and modeling a subclass of it that's defined in another way > >>>will create GARP version 2.... > >>> > >>>2 more cents, of an Euro as always.... > >>>ferdinando > >>> > >>>On Thu, 2004-06-10 at 13:03, penningd at lternet.edu wrote: > >>> > >>> > >>>>I think these are all excellent ideas, that we should follow up on. These are > >>>>closely related to the whole UI issue. Ferdinando and I have talked about > >>>>trying to generate a prototype of his IMA approach using the GARP workflow. I > >>>>think we should do the same thing with GME. Or maybe, if we look at them > >>>>together, they are closely linked and could be used together. I think > >>>>"meta-model" really conveys the idea here, and that it is the level at which our > >>>>scientists are most likely to work. Generating a working model from a > >>>>meta-model seems to be the difficult step, but that's where semantically-created > >>>>transformation steps would be extremely useful. > >>>> > >>>>Deana > >>>> > >>>> > >>>>Quoting Edward A Lee : > >>>> > >>>> > >>>> > >>>>>Apologies for my ignorance, but what is "IMA"? > >>>>> > >>>>>On a quick glance (hindered by lack of comprehension of TLA's), > >>>>>these ideas strike me as related to some work we've been doing > >>>>>on meta-modeling together with Vanderbilt University... The notion > >>>>>of meta-modeling is that one constructs models of families of models > >>>>>by specifying constraints on their static structure... 
Vanderbilt > >>>>>has a tool called GME (generic modeling environment) where a user > >>>>>specifies a meta model for a domain-specific modeling technique, > >>>>>and then GME synthesizes a customized visual editor that enforces > >>>>>those constraints. > >>>>> > >>>>>Conceivably we could build something similar in Kepler, where > >>>>>instead of building workflows, one constructs a meta model of a family > >>>>>of workflows... ? > >>>>> > >>>>>Just some random neuron firing triggered by Ferdinando's thoughts... > >>>>> > >>>>>Edward > >>>>> > >>>>> > >>>>>At 06:44 PM 6/8/2004 -0400, Ferdinando Villa wrote: > >>>>> > >>>>> > >>>>>>Hi Deana, > >>>>>> > >>>>>>I've started thinking along these lines some time ago, on the grounds > >>>>>>that modeling the high-level logical structure (rather than the > >>>>> > >>>>>workflow > >>>>> > >>>>> > >>>>>>with all its inputs, outputs and loops) may be all our typical user > >>>>> > >>>>>is > >>>>> > >>>>> > >>>>>>willing to do. Obviously I'm biased by interacting with my own user > >>>>>>community, but they're probably representative of the wider SEEK user > >>>>>>community. So I fully agree with you here. > >>>>>> > >>>>>>However, I don't think that we can achieve such an high-level > >>>>> > >>>>>paradigm > >>>>> > >>>>> > >>>>>>simply by augmenting the actors specifications. For the IMA I've done > >>>>> > >>>>>a > >>>>> > >>>>> > >>>>>>pretty thorough analysis of the relationship between the logical > >>>>>>structure of a model/pipeline/concept and the workflow that > >>>>> > >>>>>calculates > >>>>> > >>>>> > >>>>>>the states of the final "concept" you're after; as a result of that, > >>>>> > >>>>>I'm > >>>>> > >>>>> > >>>>>>pretty convinced that they don't relate that simply. 
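Edward's description of GME (a meta-model that constrains which concrete models are legal) can be sketched as a tiny validator: the meta-model lists the allowed step kinds and connections, and a concrete workflow is checked against it. All the kind and step names below are illustrative assumptions, loosely modeled on the niche-modeling discussion:

```python
# Meta-model: which kinds of steps exist and which connections are legal.
META = {
    "kinds": {"Sampling", "StatisticalModel", "Projection"},
    "edges": {("Sampling", "StatisticalModel"), ("StatisticalModel", "Projection")},
}

def check_workflow(kind_of, connections, meta=META):
    """Return constraint violations of a concrete workflow against the meta-model."""
    errors = []
    for node, kind in kind_of.items():
        if kind not in meta["kinds"]:
            errors.append(f"{node}: unknown kind {kind}")
    for a, b in connections:
        pair = (kind_of[a], kind_of[b])
        if pair not in meta["edges"]:
            errors.append(f"{a} -> {b}: {pair[0]} may not feed {pair[1]}")
    return errors

kind_of = {"presample": "Sampling", "garp": "StatisticalModel", "map": "Projection"}
ok = check_workflow(kind_of, [("presample", "garp"), ("garp", "map")])
bad = check_workflow(kind_of, [("map", "presample")])
print(ok, bad)  # ok is empty; bad flags the illegal connection
```

Binding a different statistical actor (GARP vs. a neural network) then changes only the `kind_of` entry, which is essentially Deana's "one abstract workflow, multiple views" idea.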
In Edinburgh > >>>>> > >>>>>(while > >>>>> > >>>>> > >>>>>>not listening to the MyGrid presentation) I wrote down a rough > >>>>>>explanation of what I think in this regard (and what I think that my > >>>>>>work can contribute to SEEK and Kepler), and circulated to a small > >>>>> > >>>>>group > >>>>> > >>>>> > >>>>>>for initial feedback. I attach the document, which needs some > >>>>> > >>>>>patience > >>>>> > >>>>> > >>>>>>on your part. If you can bear with some dense writing with an Italian > >>>>>>accent, I think you'll find similarities with what you propose, and > >>>>> > >>>>>I'd > >>>>> > >>>>> > >>>>>>love to hear what you think. > >>>>>> > >>>>>>Cheers, > >>>>>>ferdinando > >>>>>> > >>>>>>On Tue, 2004-06-08 at 17:04, Deana Pennington wrote: > >>>>>> > >>>>>> > >>>>>>>In thinking about the Kepler UI, it has occurred to me that it > >>>>> > >>>>>would > >>>>> > >>>>> > >>>>>>>really be nice if the ontologies that we construct to organize the > >>>>>>>actors into categories, could also be used in a high-level > >>>>> > >>>>>workflow > >>>>> > >>>>> > >>>>>>>design phase. For example, in the niche modeling workflow, GARP, > >>>>> > >>>>>neural > >>>>> > >>>>> > >>>>>>>networks, GRASP and many other algorithms could be used for that > >>>>> > >>>>>one > >>>>> > >>>>> > >>>>>>>step in the workflow. Those algorithms would all be organized > >>>>> > >>>>>under > >>>>> > >>>>> > >>>>>>>some high-level hierarchy ("StatisticalModels"). Another example is > >>>>> > >>>>>the > >>>>> > >>>>> > >>>>>>>Pre-sample step, where we are using the GARP pre-sample algorithm, > >>>>> > >>>>>but > >>>>> > >>>>> > >>>>>>>other sampling algorithms could be substituted. There should be a > >>>>>>>high-level "Sampling" concept, under which different sampling > >>>>> > >>>>>algorithms > >>>>> > >>>>> > >>>>>>>would be organized. 
During the design phase, the user could > >>>>> > >>>>>construct a > >>>>> > >>>>> > >>>>>>>workflow based on these high level concepts (Sampling and > >>>>>>>StatisticalModel), then bind an actor (already implemented or > >>>>> > >>>>>using > >>>>> > >>>>> > >>>>>>>Chad's new actor) in a particular view of that workflow. So, a > >>>>>>>workflow would be designed at a high conceptual level, and have > >>>>> > >>>>>multiple > >>>>> > >>>>> > >>>>>>>views, binding different algorithms, and those different views would > >>>>> > >>>>>be > >>>>> > >>>>> > >>>>>>>logically linked through the high level workflow. The immediate > >>>>> > >>>>>case is > >>>>> > >>>>> > >>>>>>>the GARP workflow we are designing will need another version for > >>>>> > >>>>>the > >>>>> > >>>>> > >>>>>>>neural network algorithm, and that version will be virtually an > >>>>> > >>>>>exact > >>>>> > >>>>> > >>>>>>>replicate except for that actor. Seems like it would be better to > >>>>> > >>>>>have > >>>>> > >>>>> > >>>>>>>one workflow with different views... > >>>>>>> > >>>>>>>I hope the above is coherent...in reading it, I'm not sure that it > >>>>> > >>>>>is :-) > >>>>> > >>>>> > >>>>>>>Deana > >>>>>>> > >>>>>> > >>>>>>-- > >>>>> > >>>>>------------ > >>>>>Edward A. 
Lee, Professor > >>>>>518 Cory Hall, UC Berkeley, Berkeley, CA 94720 > >>>>>phone: 510-642-0455, fax: 510-642-2739 > >>>>>eal at eecs.Berkeley.EDU, http://ptolemy.eecs.berkeley.edu/~eal > >>>>> > >>>>>_______________________________________________ > >>>>>seek-kr-sms mailing list > >>>>>seek-kr-sms at ecoinformatics.org > >>>>>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms > >>>>> > >> > >>_______________________________________________ > >>kepler-dev mailing list > >>kepler-dev at ecoinformatics.org > >>http://www.ecoinformatics.org/mailman/listinfo/kepler-dev > > _______________________________________________ > kepler-dev mailing list > kepler-dev at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/kepler-dev From Serguei.Krivov at uvm.edu Mon Jun 28 07:47:26 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Mon, 28 Jun 2004 10:47:26 -0400 Subject: [seek-kr-sms] GrOWL -metadata panel design. Message-ID: <000001c45d1e$d47a6690$3ca6c684@BTS2K3000D5635086B> Hi All, We are getting close to graphic editing support for main owl constructs- this was a difficult part. Now we can also think about easy things like icons for editing toolbox and editing metadata for ontologies. The later point however brings certain hard questions. Metadata pane should contain comments, author information and namespaces-imported, default etc. If authors info and comments could easily go into one more side panel, it may be not so easy with namespaces. There are at least 3 options: #1 Put namespaces on side panels. Pros -easy to make, cons- namespaces will be wrapped since they need long field #2 Put namespaces on popup dialog. Pros- easy to make, cons -not aesthetically appealing #3 Create tab panel. One tab contains present view , another contains metadata view. Pros- aesthetically appealing, cons - perhaps may require more work. Would you please vote or give your suggestions. Any better option (#4, #5,.) 
Thanks, Serguei -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040628/312b24af/attachment.htm From bowers at sdsc.edu Mon Jun 28 09:05:26 2004 From: bowers at sdsc.edu (Shawn Bowers) Date: Mon, 28 Jun 2004 09:05:26 -0700 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <000001c45d1e$d47a6690$3ca6c684@BTS2K3000D5635086B> References: <000001c45d1e$d47a6690$3ca6c684@BTS2K3000D5635086B> Message-ID: <40E041C6.2010805@sdsc.edu> I am not entirely sure what you are asking. If you could craft up some screen shots of what you are thinking, that would really help. My general opinion is that namespace names are useful (if people organize ontologies around them), but namespaces (as URIs) are not useful for people to view and shouldn't be integrated into the graphical interface. If someone really wants to see them, then I would suggest using a different window (for "expert" mode) that lists namespace names and their corresponding URIs. Also, how do you see people using namespaces in your editor? How are you going to present them to a user in terms of modeling constructs? E.g., are you going to view each namespace as an "ontology"? Are there operations you are planning on supporting over namespaces (like get all concepts from a namespace)? Shawn Serguei Krivov wrote: > Hi All, > > We are getting close to graphic editing support for main owl constructs- > this was a difficult part. Now we can also think about easy things like > icons for editing toolbox and editing metadata for ontologies. The > later point however brings certain hard questions. Metadata pane should > contain comments, author information and namespaces-imported, default > etc. If authors info and comments could easily go into one more side > panel, it may be not so easy with namespaces. There are at least 3 options: > > > > #1 Put namespaces on side panels.
Pros - easy to make, cons- namespaces > will be wrapped since they need long field > > #2 Put namespaces on popup dialog. Pros- easy to make, cons - not > aesthetically appealing > > #3 Create tab panel. One tab contains present view , another contains > metadata view. Pros- aesthetically appealing, cons - perhaps may require > more work. > > > > Would you please vote or give your suggestions. Any better option (#4, #5, ...) > > > > Thanks, > > Serguei > > > > > > > From rwilliams at nceas.ucsb.edu Mon Jun 28 09:49:12 2004 From: rwilliams at nceas.ucsb.edu (Rich Williams) Date: Mon, 28 Jun 2004 09:49:12 -0700 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <000001c45d1e$d47a6690$3ca6c684@BTS2K3000D5635086B> Message-ID: The ontology metadata is something that tends to be set up once and then rarely changed. Here are my first impressions of the options you listed: #1: Putting the metadata in a side-panel doesn't seem like the right thing - I don't think you ever need to see it alongside the graph, and as you point out the short fields would be a problem. #2: Protege uses the tabbed panel approach and it works quite well. There, the metadata panel is one panel in a potentially large set of views of the ontology (class hierarchy, properties, instances, various visualization plugins etc.). One issue is that GrOWL has a very different design, where different views of the ontology are all displayed in the same pane, and there are buttons that toggle between the different views. #3: I don't find the use of a dialog for this kind of infrequently used user interface element particularly aesthetically unappealing. One issue is to make sure that the button or menu item that invokes the dialog is clearly named etc. so that the dialog is easy to find. Overall, I'd be ok with either #2 or #3, leaning slightly to #3. On the positive side, the window design will be basically the same whether in a dialog or a tabbed panel.
It is quite easy to set up a tabbed panel, so I don't think you'd be getting into much extra work if you choose that option. Rich -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mercury.nceas.ucsb.edu/ecoinformatics/pipermail/seek-kr-sms/attachments/20040628/d05ddbfa/attachment.htm From Serguei.Krivov at uvm.edu Mon Jun 28 10:29:05 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Mon, 28 Jun 2004 13:29:05 -0400 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <40E041C6.2010805@sdsc.edu> Message-ID: <000201c45d35$69760a90$3ca6c684@BTS2K3000D5635086B> SB>I am not entirely sure what you are asking. If you could craft up some screen shots of what you are thinking that would really help. Hi Shawn, The tabbed pane approach is the one used by Oiledit and Protégé. See their metadata and namespace panes. The side pane approach can be thought of as merely an extension, or an alternative right side pane, of the existing design: http://ecoinformatics.uvm.edu/dmaps/growl/EcologicalConcepts.html The dialog approach can be similar to the Protégé metadata pane, except that it pops up when a button is pressed. SB>My general opinion is that namespace names are useful (if people organize ontologies around these), namespaces (as URIs) for people to view are not and shouldn't be integrated into the graphical interface. They are not part of the interface in most cases. But if you use imports, then you may like to distinguish the ontology a concept came from by a prefix, which can be some abbreviation. SB>If someone really wants to see them, then I would suggest using a different window (for "expert" mode) that lists namespace names and their corresponding URI. Yes, that is the idea. The question is how to integrate this ("expert") window into the existing GrOWL. Should it be on the side pane, in a separate popup dialog, or made a tab as in Oiledit??? Or is there a better way?
Thanks, Serguei From jones at nceas.ucsb.edu Mon Jun 28 12:27:43 2004 From: jones at nceas.ucsb.edu (Matt Jones) Date: Mon, 28 Jun 2004 11:27:43 -0800 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <40E041C6.2010805@sdsc.edu> References: <000001c45d1e$d47a6690$3ca6c684@BTS2K3000D5635086B> <40E041C6.2010805@sdsc.edu> Message-ID: <40E0712F.4080005@nceas.ucsb.edu> I have to agree with Shawn -- our end users really should not be exposed to technical details such as namespace URIs and similar constructs at all. If unique namespace URI's are needed, Growl should be generating and tracking them for the user. They'll have a hard enough time wrestling with just the ecological concepts in a formal manner. So I would add #4: #4: No namespaces exposed in GUI whatsoever Matt Shawn Bowers wrote: > > I am not entirely sure what you are asking. If you could craft up some > screen shots of what you are thinking that would really help. > > My general opinion is that namespace names are useful (if people > organize ontologies around these), namespaces (as URIs) for people to > view are not and shouldn't be integrated into the graphical interface. > If someone really wants to see them, then I would suggest using a > different window (for "expert" mode) that lists namespace names and > their corresponding URI. > > Also, how do you see people using namespaces in your editor? How are you > going to present them to a user in terms of modeling constructs? E.g., > are you going to view each namespace as an "ontology"? Are there > operations you are planning on supporting over namespaces (like get all > concepts from a namespace)? > > > Shawn > > > Serguei Krivov wrote: > >> Hi All, >> >> We are getting close to graphic editing support for main owl >> constructs- this was a difficult part. Now we can also think about >> easy things like icons for editing toolbox and editing metadata for >> ontologies. The later point however brings certain hard questions. 
>> Metadata pane should contain comments, author information and >> namespaces-imported, default etc. If authors info and comments could >> easily go into one more side panel, it may be not so easy with >> namespaces. There are at least 3 options: >> >> >> >> #1 Put namespaces on side panels. Pros - easy to make, cons- namespaces >> will be wrapped since they need long field >> >> #2 Put namespaces on popup dialog. Pros- easy to make, cons - not >> aesthetically appealing >> >> #3 Create tab panel. One tab contains present view , another contains >> metadata view. Pros- aesthetically appealing, cons - perhaps may >> require more work. >> >> >> >> Would you please vote or give your suggestions. Any better option (#4, >> #5, ...) >> >> >> >> Thanks, >> >> Serguei >> >> >> >> >> >> >> > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- ------------------------------------------------------------------- Matt Jones jones at nceas.ucsb.edu http://www.nceas.ucsb.edu/ Fax: 425-920-2439 Ph: 907-789-0496 National Center for Ecological Analysis and Synthesis (NCEAS) University of California Santa Barbara Interested in ecological informatics? http://www.ecoinformatics.org ------------------------------------------------------------------- From rich at sfsu.edu Mon Jun 28 14:44:59 2004 From: rich at sfsu.edu (Rich Williams) Date: Mon, 28 Jun 2004 14:44:59 -0700 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <40E0712F.4080005@nceas.ucsb.edu> Message-ID: While I appreciate the desire for simplicity, I don't think the issue of namespaces can be entirely avoided, though it can and should largely be buried and hidden from the average user. I think the issues are more present when editing an ontology than when browsing, but even when browsing, namespace prefixes might be necessary to accurately disambiguate concept names.
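To make that disambiguation point concrete: if two imported ontologies each define a class with the same local name, only a prefix (or the full URI) tells them apart in a display. A minimal Python sketch, not GrOWL code - the prefixes, ontology URIs, and function name below are all made-up illustrations:

```python
# Illustrative only: a tiny prefix registry for rendering concept URIs.
# The prefixes and ontology URIs are invented examples, not SEEK ontologies.
PREFIXES = {
    "eco": "http://example.org/ontology/Ecology.owl#",
    "res": "http://example.org/ontology/Resources.owl#",
}

def display_name(uri):
    """Shorten a full concept URI to prefix:LocalName for the GUI."""
    for prefix, ns in PREFIXES.items():
        if uri.startswith(ns):
            return prefix + ":" + uri[len(ns):]
    return uri  # unknown namespace: fall back to the full URI

# Two distinct concepts share the local name "Resource"; only the
# prefix tells a user which ontology each one came from.
print(display_name("http://example.org/ontology/Ecology.owl#Resource"))   # eco:Resource
print(display_name("http://example.org/ontology/Resources.owl#Resource")) # res:Resource
```

With this kind of mapping, only colliding or imported names would ever need to show a prefix; everything else stays short, which keeps raw URIs out of the casual user's face.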
To completely suppress namespace prefixes is essentially adding a more stringent naming requirement than currently exists in OWL. It would require that names be globally unique, rather than just unique within a namespace. If GrOWL is to be a general purpose ontology browsing and editing tool, this requirement cannot be enforced, and so some level of namespace exposure is inevitable. Of course it shouldn't be something staring the first-time user in the face, but I think it does need to be dealt with. Remember also that Serguei was asking about the UI for ontology metadata in general, not just about the UI for imports and namespaces. Even if namespaces are not exposed, there still needs to be a UI (panel/dialog/tab) for this other metadata. Rich -----Original Message----- From: seek-kr-sms-admin at ecoinformatics.org [mailto:seek-kr-sms-admin at ecoinformatics.org]On Behalf Of Matt Jones Sent: Monday, June 28, 2004 12:28 PM To: Shawn Bowers Cc: Serguei Krivov; seek-kr-sms at ecoinformatics.org Subject: Re: [seek-kr-sms] GrOWL -metadata panel design. I have to agree with Shawn -- our end users really should not be exposed to technical details such as namespace URIs and similar constructs at all. If unique namespace URI's are needed, Growl should be generating and tracking them for the user. They'll have a hard enough time wrestling with just the ecological concepts in a formal manner. So I would add #4: #4: No namespaces exposed in GUI whatsoever Matt Shawn Bowers wrote: > > I am not entirely sure what you are asking. If you could craft up some > screen shots of what you are thinking that would really help. > > My general opinion is that namespace names are useful (if people > organize ontologies around these), namespaces (as URIs) for people to > view are not and shouldn't be integrated into the graphical interface. 
> If someone really wants to see them, then I would suggest using a > different window (for "expert" mode) that lists namespace names and > their corresponding URI. > > Also, how do you see people using namespaces in your editor? How are you > going to present them to a user in terms of modeling constructs? E.g., > are you going to view each namespace as an "ontology"? Are there > operations you are planning on supporting over namespaces (like get all > concepts from a namespace)? > > > Shawn > > > Serguei Krivov wrote: > >> Hi All, >> >> We are getting close to graphic editing support for main owl >> constructs- this was a difficult part. Now we can also think about >> easy things like icons for editing toolbox and editing metadata for >> ontologies. The later point however brings certain hard questions. >> Metadata pane should contain comments, author information and >> namespaces-imported, default etc. If authors info and comments could >> easily go into one more side panel, it may be not so easy with >> namespaces. There are at least 3 options: >> >> >> >> #1 Put namespaces on side panels. Pros - easy to make, cons- namespaces >> will be wrapped since they need long field >> >> #2 Put namespaces on popup dialog. Pros- easy to make, cons - not >> aesthetically appealing >> >> #3 Create tab panel. One tab contains present view , another contains >> metadata view. Pros- aesthetically appealing, cons - perhaps may >> require more work. >> >> >> >> Would you please vote or give your suggestions. Any better option (#4, >> #5, ...)
>> >> >> >> Thanks, >> >> Serguei >> >> >> >> >> >> >> > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- ------------------------------------------------------------------- Matt Jones jones at nceas.ucsb.edu http://www.nceas.ucsb.edu/ Fax: 425-920-2439 Ph: 907-789-0496 National Center for Ecological Analysis and Synthesis (NCEAS) University of California Santa Barbara Interested in ecological informatics? http://www.ecoinformatics.org ------------------------------------------------------------------- _______________________________________________ seek-kr-sms mailing list seek-kr-sms at ecoinformatics.org http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms From ludaesch at sdsc.edu Wed Jun 30 04:40:58 2004 From: ludaesch at sdsc.edu (Bertram Ludaescher) Date: Wed, 30 Jun 2004 04:40:58 -0700 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: References: <40E0712F.4080005@nceas.ucsb.edu> Message-ID: <16610.42698.456000.352881@gargle.gargle.HOWL> I don't know how easy/meaningful this is, but maybe there can be a switch "ns on/off". Bertram Rich Williams writes: > While I appreciate the desire for simplicity, I don't think the issue of > namespaces can be entirely avoided, though it can and should largely be > buried and hidden from the average user. I think the issues are more > present when editing an ontology than when browsing, but even when browsing, > namespace prefixes might be necessary to accurately disambiguate concept > names. To completely suppress namespace prefixes is essentially adding a > more stringent naming requirement than currently exists in OWL. It would > require that names be globally unique, rather than just unique within a > namespace. 
If GrOWL is to be a general purpose ontology browsing and > editing tool, this requirement cannot be enforced, and so some level of > namespace exposure is inevitable. Of course it shouldn't be something > staring the first-time user in the face, but I think it does need to be > dealt with. > > Remember also that Serguei was asking about the UI for ontology metadata in > general, not just about the UI for imports and namespaces. Even if > namespaces are not exposed, there still needs to be a UI (panel/dialog/tab) > for this other metadata. > > Rich > > -----Original Message----- > From: seek-kr-sms-admin at ecoinformatics.org > [mailto:seek-kr-sms-admin at ecoinformatics.org]On Behalf Of Matt Jones > Sent: Monday, June 28, 2004 12:28 PM > To: Shawn Bowers > Cc: Serguei Krivov; seek-kr-sms at ecoinformatics.org > Subject: Re: [seek-kr-sms] GrOWL -metadata panel design. > > > I have to agree with Shawn -- our end users really should not be exposed > to technical details such as namespace URIs and similar constructs at > all. If unique namespace URI's are needed, Growl should be generating > and tracking them for the user. They'll have a hard enough time > wrestling with just the ecological concepts in a formal manner. So I > would add #4: > > #4: No namespaces exposed in GUI whatsoever > > Matt > > Shawn Bowers wrote: > > > > I am not entirely sure what you are asking. If you could craft up some > > screen shots of what you are thinking that would really help. > > > > My general opinion is that namespace names are useful (if people > > organize ontologies around these), namespaces (as URIs) for people to > > view are not and shouldn't be integrated into the graphical interface. > > If someone really wants to see them, then I would suggest using a > > different window (for "expert" mode) that lists namespace names and > > their corresponding URI. > > > > Also, how do you see people using namespaces in your editor? 
How are you > > going to present them to a user in terms of modeling constructs? E.g., > > are you going to view each namespace as an "ontology"? Are there > > operations you are planning on supporting over namespaces (like get all > > concepts from a namespace)? > > > > > > Shawn > > > > > > Serguei Krivov wrote: > > > >> Hi All, > >> > >> We are getting close to graphic editing support for main owl > >> constructs- this was a difficult part. Now we can also think about > >> easy things like icons for editing toolbox and editing metadata for > >> ontologies. The later point however brings certain hard questions. > >> Metadata pane should contain comments, author information and > >> namespaces-imported, default etc. If authors info and comments could > >> easily go into one more side panel, it may be not so easy with > >> namespaces. There are at least 3 options: > >> > >> > >> > >> #1 Put namespaces on side panels. Pros - easy to make, cons- namespaces > >> will be wrapped since they need long field > >> > >> #2 Put namespaces on popup dialog. Pros- easy to make, cons - not > >> aesthetically appealing > >> > >> #3 Create tab panel. One tab contains present view , another contains > >> metadata view. Pros- aesthetically appealing, cons - perhaps may > >> require more work. > >> > >> > >> > >> Would you please vote or give your suggestions. Any better option (#4, > >> #5, ...) > >> > >> > >> > >> Thanks, > >> > >> Serguei > >> > >> > >> > >> > >> > >> > >> > > > > _______________________________________________ > > seek-kr-sms mailing list > > seek-kr-sms at ecoinformatics.org > > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms > > -- > ------------------------------------------------------------------- > Matt Jones jones at nceas.ucsb.edu > http://www.nceas.ucsb.edu/ Fax: 425-920-2439 Ph: 907-789-0496 > National Center for Ecological Analysis and Synthesis (NCEAS) > University of California Santa Barbara > Interested in ecological informatics?
http://www.ecoinformatics.org > ------------------------------------------------------------------- > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms From Serguei.Krivov at uvm.edu Wed Jun 30 08:08:21 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Wed, 30 Jun 2004 11:08:21 -0400 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <40E0712F.4080005@nceas.ucsb.edu> Message-ID: <000101c45eb4$15200240$3ca6c684@BTS2K3000D5635086B> So I would add #4: #4: No namespaces exposed in GUI whatsoever Matt Apparently, we have two different questions: 1. About exposing namespaces of, say, classes and relations during browsing and editing. We may hide them, which would work fine when there is no imported ontology. If there is an imported ontology, then one might like to know where definitions came from. But certainly, looking at the long URIs may frighten anyone, including myself. If we decide that at this point we do not support imported ontologies, then #4 is just fine. If we want import, then the question is: how can we specify node namespaces without messing with repugnant URIs? 2. About specifying the global namespaces which go into the heading of the ontology specification. There are many standard namespaces that could be generated automatically, such as: xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:owl="http://www.w3.org/2002/07/owl#" But how about specifying the base namespace, such as: xml:base="http://wow.sfsu.edu/ontology/rich/Resources.owl" We should also provide some metadata, which probably can be in one place with the base namespace.
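The "generated automatically" part is easy to sketch: keep the well-known bindings in one table and emit the header attributes from them, so only xml:base needs input from the user (or the tool). A rough Python sketch - the helper function is hypothetical, not part of GrOWL:

```python
# Hypothetical sketch: auto-generate the standard namespace declarations
# that every saved ontology header needs; only xml:base varies per ontology.
STANDARD_NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "owl": "http://www.w3.org/2002/07/owl#",
}

def header_attributes(base_uri):
    """Build the xmlns/xml:base attribute string for the rdf:RDF element."""
    attrs = ['xmlns:%s="%s"' % (p, uri) for p, uri in STANDARD_NS.items()]
    attrs.append('xml:base="%s"' % base_uri)
    return " ".join(attrs)

print(header_attributes("http://wow.sfsu.edu/ontology/rich/Resources.owl"))
```

Under this scheme the user never types the boilerplate xmlns lines; the editor only has to ask for (or derive) the base URI.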
If we would like to avoid seeing these technicalities all the time, then perhaps the tab pane (#3) is not the right thing. Then should it be done in a dialog which pops up before the user saves the ontology, or in a separate popup dialog, or something else? Thanks, Serguei Shawn Bowers wrote: > > I am not entirely sure what you are asking. If you could craft up some > screen shots of what you are thinking that would really help. > > My general opinion is that namespace names are useful (if people > organize ontologies around these), namespaces (as URIs) for people to > view are not and shouldn't be integrated into the graphical interface. > If someone really wants to see them, then I would suggest using a > different window (for "expert" mode) that lists namespace names and > their corresponding URI. > > Also, how do you see people using namespaces in your editor? How are you > going to present them to a user in terms of modeling constructs? E.g., > are you going to view each namespace as an "ontology"? Are there > operations you are planning on supporting over namespaces (like get all > concepts from a namespace)? > > > Shawn > > > Serguei Krivov wrote: > >> Hi All, >> >> We are getting close to graphic editing support for main owl >> constructs- this was a difficult part. Now we can also think about >> easy things like icons for editing toolbox and editing metadata for >> ontologies. The later point however brings certain hard questions. >> Metadata pane should contain comments, author information and >> namespaces-imported, default etc. If authors info and comments could >> easily go into one more side panel, it may be not so easy with >> namespaces. There are at least 3 options: >> >> >> >> #1 Put namespaces on side panels. Pros -easy to make, cons- namespaces >> will be wrapped since they need long field >> >> #2 Put namespaces on popup dialog. Pros- easy to make, cons -not >> aesthetically appealing >> >> #3 Create tab panel.
One tab contains present view , another contains >> metadata view. Pros- aesthetically appealing, cons - perhaps may >> require more work. >> >> >> >> Would you please vote or give your suggestions. Any better option (#4, >> #5,.) >> >> >> >> Thanks, >> >> Serguei >> >> >> >> >> >> >> > > _______________________________________________ > seek-kr-sms mailing list > seek-kr-sms at ecoinformatics.org > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- ------------------------------------------------------------------- Matt Jones jones at nceas.ucsb.edu http://www.nceas.ucsb.edu/ Fax: 425-920-2439 Ph: 907-789-0496 National Center for Ecological Analysis and Synthesis (NCEAS) University of California Santa Barbara Interested in ecological informatics? http://www.ecoinformatics.org ------------------------------------------------------------------- _______________________________________________ seek-kr-sms mailing list seek-kr-sms at ecoinformatics.org http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms From ferdinando.villa at uvm.edu Wed Jun 30 08:28:25 2004 From: ferdinando.villa at uvm.edu (Ferdinando Villa) Date: Wed, 30 Jun 2004 11:28:25 -0400 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <000101c45eb4$15200240$3ca6c684@BTS2K3000D5635086B> References: <000101c45eb4$15200240$3ca6c684@BTS2K3000D5635086B> Message-ID: <1088609305.4456.16.camel@basil.snr.uvm.edu> Please DO support imported ontologies. They're necessary to grasp the meaning of what you're looking at. We don't want to be bound to creating ontologies that only embody a fully self-consistent concept space. And speaking of this, what about some non-tech terminology to clarify things for the user? In the IMA I call "concept space" the namespace id. 
We can find a better name, but if we had a little status bar with the current "Concept space: xxxx" name always highlighted, changing as you switch to imported ontologies, and we make sure that the namespace id means something coherent and understandable, I think it would make sense to the user - certainly better than saying "namespace" anywhere. It's a syntax vs. semantics issue. Of course the URI would be in a normally hidden info pane or dialog as Serguei suggests. And of course I would definitely hide stuff belonging to core namespaces such as rdfs or owl (we could configure these in). ferdinando On Wed, 2004-06-30 at 11:08, Serguei Krivov wrote: > So I would add #4: > #4: No namespaces exposed in GUI whatsoever > Matt > > > Apparently, we have two different questions: > 1. About exposing namespaces of say classes and relations during > browsing and editing. We may hide them ,which would work fine when there > is no any imported ontology. If there is an imported ontology then one > maight like to know from where definitions came. But certainly, looking > at the long URI may frighten anyone, including myself. > If we decide that at this point we do not support imported ontologies > then #4 is just fine. If we want import, then the question is- how we > can specify node namespaces without messing up with repugnant URIs? > > 2. About specifying global namespaces which go to the heading of > ontology specification. There are many standard namespaces that could be > generated automatically, such as: > xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" > xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" > xmlns:owl=http://www.w3.org/2002/07/owl# > > But how about specifying the base namespace, such as : > xml:base="http://wow.sfsu.edu/ontology/rich/Resources.owl" > > We should also provide some metadata which probably can be in one place > with base namespace. 
If we would like to avoid seeing these > technicalities all the time, then perhaps tab pane (#3) is not right > thing. Then should it be done in a dialog which pops up before user > saves ontology, or separate popup dialog, or something else? > > Thanks, > Serguei > > > > Shawn Bowers wrote: > > > > I am not entirely sure what you are asking. If you could craft up some > > > screen shots of what you are thinking that would really help. > > > > My general opinion is that namespace names are useful (if people > > organize ontologies around these), namespaces (as URIs) for people to > > view are not and shouldn't be integrated into the graphical interface. > > > If someone really wants to see them, then I would suggest using a > > different window (for "expert" mode) that lists namespace names and > > their corresponding URI. > > > > Also, how do you see people using namespaces in your editor? How are > you > > going to present them to a user in terms of modeling constructs? E.g., > > > are you going to view each namespace as an "ontology"? Are there > > operations you are planning on supporting over namespaces (like get > all > > concepts from a namespace)? > > > > > > Shawn > > > > > > Serguei Krivov wrote: > > > >> Hi All, > >> > >> We are getting close to graphic editing support for main owl > >> constructs- this was a difficult part. Now we can also think about > >> easy things like icons for editing toolbox and editing metadata for > >> ontologies. The later point however brings certain hard questions. > >> Metadata pane should contain comments, author information and > >> namespaces-imported, default etc. If authors info and comments could > >> easily go into one more side panel, it may be not so easy with > >> namespaces. There are at least 3 options: > >> > >> > >> > >> #1 Put namespaces on side panels. Pros -easy to make, cons- > namespaces > >> will be wrapped since they need long field > >> > >> #2 Put namespaces on popup dialog. 
Pros- easy to make, cons -not > >> aesthetically appealing > >> > >> #3 Create tab panel. One tab contains present view , another contains > > >> metadata view. Pros- aesthetically appealing, cons - perhaps may > >> require more work. > >> > >> > >> > >> Would you please vote or give your suggestions. Any better option > (#4, > >> #5,.) > >> > >> > >> > >> Thanks, > >> > >> Serguei > >> > >> > >> > >> > >> > >> > >> > > > > _______________________________________________ > > seek-kr-sms mailing list > > seek-kr-sms at ecoinformatics.org > > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- Ferdinando Villa, Ph.D., Associate Research Professor, Ecoinformatics Gund Institute for Ecological Economics and Dept of Botany, Univ. of Vermont http://ecoinformatics.uvm.edu From Serguei.Krivov at uvm.edu Wed Jun 30 08:44:11 2004 From: Serguei.Krivov at uvm.edu (Serguei Krivov) Date: Wed, 30 Jun 2004 11:44:11 -0400 Subject: [seek-kr-sms] GrOWL -metadata panel design. In-Reply-To: <1088609305.4456.16.camel@basil.snr.uvm.edu> Message-ID: <000901c45eb9$16a9e450$3ca6c684@BTS2K3000D5635086B> OK, as I understand, the idea is to give meaningful names to the base and other imported ontologies, then let the user see these names in the property pane (instead of the current URI base). We would then specify the real URIs in some undercover dialog box, where we also establish their mapping to the simple names. There would be some default mapping used when loading ontologies. So we have #5: simple IDs for namespaces visible/selectable in property panes + technical details and metadata in a popup dialog. #5 is on top! Any other bids!? serguei ------------------------------------------------------------------------ -------------- Serguei Krivov, Assist. Research Professor, Computer Science Dept. & Gund Inst. for Ecological Economics, University of Vermont; 590 Main St.
Burlington VT 05405 phone: (802)-656-2978 -----Original Message----- From: seek-kr-sms-admin at ecoinformatics.org [mailto:seek-kr-sms-admin at ecoinformatics.org] On Behalf Of Ferdinando Villa Sent: Wednesday, June 30, 2004 11:28 AM To: Serguei Krivov Cc: 'Matt Jones'; 'Shawn Bowers'; seek-kr-sms at ecoinformatics.org Subject: RE: [seek-kr-sms] GrOWL -metadata panel design. Please DO support imported ontologies. They're necessary to grasp the meaning of what you're looking at. We don't want to be bound to creating ontologies that only embody a fully self-consistent concept space. And speaking of this, what about some non-tech terminology to clarify things for the user? In the IMA I call "concept space" the namespace id. We can find a better name, but if we had a little status bar with the current "Concept space: xxxx" name always highlighted, changing as you switch to imported ontologies, and we make sure that the namespace id means something coherent and understandable, I think it would make sense to the user - certainly better than saying "namespace" anywhere. It's a syntax vs. semantics issue. Of course the URI would be in a normally hidden info pane or dialog as Serguei suggests. And of course I would definitely hide stuff belonging to core namespaces such as rdfs or owl (we could configure these in). ferdinando On Wed, 2004-06-30 at 11:08, Serguei Krivov wrote: > So I would add #4: > #4: No namespaces exposed in GUI whatsoever > Matt > > > Apparently, we have two different questions: > 1. About exposing namespaces of say classes and relations during > browsing and editing. We may hide them ,which would work fine when there > is no any imported ontology. If there is an imported ontology then one > maight like to know from where definitions came. But certainly, looking > at the long URI may frighten anyone, including myself. > If we decide that at this point we do not support imported ontologies > then #4 is just fine. 
If we want import, then the question is- how we > can specify node namespaces without messing up with repugnant URIs? > > 2. About specifying global namespaces which go to the heading of > ontology specification. There are many standard namespaces that could be > generated automatically, such as: > xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" > xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" > xmlns:owl=http://www.w3.org/2002/07/owl# > > But how about specifying the base namespace, such as : > xml:base="http://wow.sfsu.edu/ontology/rich/Resources.owl" > > We should also provide some metadata which probably can be in one place > with base namespace. If we would like to avoid seeing these > technicalities all the time, then perhaps tab pane (#3) is not right > thing. Then should it be done in a dialog which pops up before user > saves ontology, or separate popup dialog, or something else? > > Thanks, > Serguei > > > > Shawn Bowers wrote: > > > > I am not entirely sure what you are asking. If you could craft up some > > > screen shots of what you are thinking that would really help. > > > > My general opinion is that namespace names are useful (if people > > organize ontologies around these), namespaces (as URIs) for people to > > view are not and shouldn't be integrated into the graphical interface. > > > If someone really wants to see them, then I would suggest using a > > different window (for "expert" mode) that lists namespace names and > > their corresponding URI. > > > > Also, how do you see people using namespaces in your editor? How are > you > > going to present them to a user in terms of modeling constructs? E.g., > > > are you going to view each namespace as an "ontology"? Are there > > operations you are planning on supporting over namespaces (like get > all > > concepts from a namespace)? 
> > > > > > Shawn > > > > > > Serguei Krivov wrote: > > > >> Hi All, > >> > >> We are getting close to graphic editing support for main owl > >> constructs- this was a difficult part. Now we can also think about > >> easy things like icons for editing toolbox and editing metadata for > >> ontologies. The later point however brings certain hard questions. > >> Metadata pane should contain comments, author information and > >> namespaces-imported, default etc. If authors info and comments could > >> easily go into one more side panel, it may be not so easy with > >> namespaces. There are at least 3 options: > >> > >> > >> > >> #1 Put namespaces on side panels. Pros -easy to make, cons- > namespaces > >> will be wrapped since they need long field > >> > >> #2 Put namespaces on popup dialog. Pros- easy to make, cons -not > >> aesthetically appealing > >> > >> #3 Create tab panel. One tab contains present view , another contains > > >> metadata view. Pros- aesthetically appealing, cons - perhaps may > >> require more work. > >> > >> > >> > >> Would you please vote or give your suggestions. Any better option > (#4, > >> #5,.) > >> > >> > >> > >> Thanks, > >> > >> Serguei > >> > >> > >> > >> > >> > >> > >> > > > > _______________________________________________ > > seek-kr-sms mailing list > > seek-kr-sms at ecoinformatics.org > > http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- Ferdinando Villa, Ph.D., Associate Research Professor, Ecoinformatics Gund Institute for Ecological Economics and Dept of Botany, Univ. of Vermont http://ecoinformatics.uvm.edu _______________________________________________ seek-kr-sms mailing list seek-kr-sms at ecoinformatics.org http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms From jones at nceas.ucsb.edu Wed Jun 30 10:09:06 2004 From: jones at nceas.ucsb.edu (Matt Jones) Date: Wed, 30 Jun 2004 09:09:06 -0800 Subject: [seek-kr-sms] GrOWL -metadata panel design. 
In-Reply-To: <000901c45eb9$16a9e450$3ca6c684@BTS2K3000D5635086B> References: <000901c45eb9$16a9e450$3ca6c684@BTS2K3000D5635086B> Message-ID: <40E2F3B2.2050507@nceas.ucsb.edu> I think this is a good approach. We definitely need to be able to import. I was just arguing that the ugly specifics of namespace syntax and terminology should be hidden from the user. I like Ferdinando's suggestion to make it a simple label that changes as you browse to new namespaces. In many ways this is a more readable version of the namespace prefix. And I also agree that the actual underlying namespace URI needs to be settable because this is an editor, not just a browser. However, it's not clear to me that our scientists would ever choose reasonable namespace URIs, so we might consider whether they can be autogenerated as needed when creating new ontologies. They are really just arbitrary IDs, and as long as they are unique URIs they should be fine. Maybe we could strive to use LSIDs for the namespace URIs for consistency with the direction proposed in Edinburgh for SMS and EcoGrid? Matt Serguei Krivov wrote: > OK, as I understand the idea is to give meaningful names to the base and > other imported ontologies. Then let user see these names on the property > pane (instead of current uri base). We would then specify real uri in > some undercover dialog box , where we also establish their mapping to > simple names. There will be some default mapping used during loading > ontologies. > > So we have #5: simple IDs for namespaces visible/ selectable in property > panes + technical details and metadata in a popup dialog. > > #5 is on the top! > Any other bids!? > > serguei > > ------------------------------------------------------------------------ > -------------- > Serguei Krivov, Assist. Research Professor, > Computer Science Dept. & Gund Inst. for Ecological Economics, > University of Vermont; 590 Main St. 
Burlington VT 05405 > phone: (802)-656-2978 > > > -----Original Message----- > From: seek-kr-sms-admin at ecoinformatics.org > [mailto:seek-kr-sms-admin at ecoinformatics.org] On Behalf Of Ferdinando > Villa > Sent: Wednesday, June 30, 2004 11:28 AM > To: Serguei Krivov > Cc: 'Matt Jones'; 'Shawn Bowers'; seek-kr-sms at ecoinformatics.org > Subject: RE: [seek-kr-sms] GrOWL -metadata panel design. > > Please DO support imported ontologies. They're necessary to grasp the > meaning of what you're looking at. We don't want to be bound to creating > ontologies that only embody a fully self-consistent concept space. > > And speaking of this, what about some non-tech terminology to clarify > things for the user? In the IMA I call "concept space" the namespace id. > We can find a better name, but if we had a little status bar with the > current "Concept space: xxxx" name always highlighted, changing as you > switch to imported ontologies, and we make sure that the namespace id > means something coherent and understandable, I think it would make sense > to the user - certainly better than saying "namespace" anywhere. It's a > syntax vs. semantics issue. Of course the URI would be in a normally > hidden info pane or dialog as Serguei suggests. And of course I would > definitely hide stuff belonging to core namespaces such as rdfs or owl > (we could configure these in). > > ferdinando > > > On Wed, 2004-06-30 at 11:08, Serguei Krivov wrote: > >>So I would add #4: >>#4: No namespaces exposed in GUI whatsoever >>Matt >> >> >>Apparently, we have two different questions: >> 1. About exposing namespaces of say classes and relations during >>browsing and editing. We may hide them ,which would work fine when > > there > >>is no any imported ontology. If there is an imported ontology then one >>maight like to know from where definitions came. But certainly, > > looking > >>at the long URI may frighten anyone, including myself. 
>>If we decide that at this point we do not support imported ontologies >>then #4 is just fine. If we want import, then the question is- how we >>can specify node namespaces without messing up with repugnant URIs? >> >>2. About specifying global namespaces which go to the heading of >>ontology specification. There are many standard namespaces that could > > be > >>generated automatically, such as: >>xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" >>xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" >>xmlns:owl=http://www.w3.org/2002/07/owl# >> >>But how about specifying the base namespace, such as : >>xml:base="http://wow.sfsu.edu/ontology/rich/Resources.owl" >> >>We should also provide some metadata which probably can be in one > > place > >>with base namespace. If we would like to avoid seeing these >>technicalities all the time, then perhaps tab pane (#3) is not right >>thing. Then should it be done in a dialog which pops up before user >>saves ontology, or separate popup dialog, or something else? >> >>Thanks, >>Serguei >> >> >> >>Shawn Bowers wrote: >> >>>I am not entirely sure what you are asking. If you could craft up > > some > >>>screen shots of what you are thinking that would really help. >>> >>>My general opinion is that namespace names are useful (if people >>>organize ontologies around these), namespaces (as URIs) for people > > to > >>>view are not and shouldn't be integrated into the graphical > > interface. > >>>If someone really wants to see them, then I would suggest using a >>>different window (for "expert" mode) that lists namespace names and >>>their corresponding URI. >>> >>>Also, how do you see people using namespaces in your editor? How are >> >>you >> >>>going to present them to a user in terms of modeling constructs? > > E.g., > >>>are you going to view each namespace as an "ontology"? Are there >>>operations you are planning on supporting over namespaces (like get >> >>all >> >>>concepts from a namespace)? 
>>> >>> >>>Shawn >>> >>> >>>Serguei Krivov wrote: >>> >>> >>>>Hi All, >>>> >>>>We are getting close to graphic editing support for main owl >>>>constructs- this was a difficult part. Now we can also think about >>>>easy things like icons for editing toolbox and editing metadata > > for > >>>>ontologies. The later point however brings certain hard questions. > > >>>>Metadata pane should contain comments, author information and >>>>namespaces-imported, default etc. If authors info and comments > > could > >>>>easily go into one more side panel, it may be not so easy with >>>>namespaces. There are at least 3 options: >>>> >>>> >>>> >>>>#1 Put namespaces on side panels. Pros -easy to make, cons- >> >>namespaces >> >>>>will be wrapped since they need long field >>>> >>>>#2 Put namespaces on popup dialog. Pros- easy to make, cons -not >>>>aesthetically appealing >>>> >>>>#3 Create tab panel. One tab contains present view , another > > contains > >>>>metadata view. Pros- aesthetically appealing, cons - perhaps may >>>>require more work. >>>> >>>> >>>> >>>>Would you please vote or give your suggestions. Any better option >> >>(#4, >> >>>>#5,.) >>>> >>>> >>>> >>>>Thanks, >>>> >>>>Serguei >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>> >>>_______________________________________________ >>>seek-kr-sms mailing list >>>seek-kr-sms at ecoinformatics.org >>>http://www.ecoinformatics.org/mailman/listinfo/seek-kr-sms -- ------------------------------------------------------------------- Matt Jones jones at nceas.ucsb.edu http://www.nceas.ucsb.edu/ Fax: 425-920-2439 Ph: 907-789-0496 National Center for Ecological Analysis and Synthesis (NCEAS) University of California Santa Barbara Interested in ecological informatics? http://www.ecoinformatics.org -------------------------------------------------------------------
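[Archive editor's note: the bookkeeping the thread converges on - option #5 (short user-visible names mapped to hidden namespace URIs), hiding core namespaces like rdfs/owl, and Matt's suggestion to autogenerate unique base URIs - can be sketched in a few lines. This is a hypothetical illustration, not GrOWL code; every name and URI below is made up.]

```python
# Sketch of option #5 from the thread: the GUI shows short "concept space"
# names while the full namespace URIs live in a hidden prefix-to-URI map.
# All prefixes and URIs here are hypothetical examples.
import uuid

# Default mapping established when an ontology is loaded; the simple
# names on the left are what the property pane would display.
prefix_map = {
    "eco": "http://example.org/ontology/eco#",        # hypothetical import
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",  # core, hidden in GUI
    "owl": "http://www.w3.org/2002/07/owl#",          # core, hidden in GUI
}

# Core namespaces to hide, configurable as Ferdinando suggests.
CORE_PREFIXES = {"rdf", "rdfs", "owl", "xsd"}

def visible_prefixes(mapping):
    """Prefixes shown to the user; core namespaces are filtered out."""
    return sorted(p for p in mapping if p not in CORE_PREFIXES)

def autogenerate_base():
    """Autogenerate a unique base URI for a new ontology, per Matt's
    suggestion that these are arbitrary IDs and need only be unique."""
    return "http://example.org/ontology/%s#" % uuid.uuid4()
```

Usage: the editor would display only `visible_prefixes(prefix_map)` in the property pane, and call `autogenerate_base()` when a scientist creates a new ontology without choosing a URI themselves.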