To Pat / Ontac Dev - (01)
[this email was written earlier but somehow got stuck in my outbox...] (02)
I have been catching up with email recently --- First, I want to
acknowledge and thank you, Pat, for this summary email to the
ONTAC-WG; I will use it as my ongoing reference. Point (2) in
particular led me to a renewed reading of Kuhn. For this reason I
would like to reiterate a reminder of Joseph Goguen's work on
Institutions (and refer to Chris Menzel's reference to it). I would
also like to lay out for consideration a viewpoint I took from Kuhn,
suggestions on how it relates to the lattice of theories, and finally
how this bears on the practical matters at hand. (03)
Kuhn argues that a theory defines the meaning of its vocabulary ---
that the vocabulary is an intrinsic element within the theory, not
something outside it. By extension, if every vocabulary is part of
some theory, then there can be no meta-vocabulary that exists
independently of any theory, and direct inter-vocabulary
interoperability is therefore impossible. What we need to provide for
interoperability is morphisms between theories, treating ontologies
as theories over a logic (whose elements are the rules/axioms/models)
--- this is in essence what I learned from our work on analogical
reasoning. Theories each provide their own filter, interpretation, or
viewpoint of the world, and what is therefore needed is a lattice of
theories with sufficient breadth, depth, and scope over a world-view
to enable interoperability via morphisms over these theories.
I hope someone out there understands what I am trying to
communicate --- I believe it is critical for us to concern ourselves
with the relationship between a vocabulary, the mathematical
description of an ontology as a theory over a logic, and
interoperability as morphisms. I find it hard to communicate this to
folks at large, primarily, I assume, due to my own peculiar
background/context. So for those who concur, any help is
welcome. (04)
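
To make the morphism idea slightly more concrete, here is a toy
sketch in Python. Every name and axiom below is invented purely for
illustration, and entailment is crudely approximated by set
membership, so this shows only the shape of the idea, not a real
implementation:

# A toy rendering of "ontologies as theories over a logic,
# interoperability as morphisms".  Sentences are nested tuples of
# symbols; entailment is crudely approximated by set membership.

from typing import Dict, FrozenSet, Tuple

Sentence = Tuple  # e.g. ("subclass", "Car", "Vehicle")

class Theory:
    def __init__(self, signature: FrozenSet[str],
                 axioms: FrozenSet[Sentence]):
        self.signature = signature
        self.axioms = axioms

def translate(sentence: Sentence, mapping: Dict[str, str]) -> Sentence:
    """Rename every symbol in a sentence along the signature mapping."""
    return tuple(translate(s, mapping) if isinstance(s, tuple)
                 else mapping.get(s, s)
                 for s in sentence)

def is_morphism(src: Theory, dst: Theory,
                mapping: Dict[str, str]) -> bool:
    """A signature map is a theory morphism if it carries every axiom
    of the source theory to something the target theory holds."""
    return all(translate(ax, mapping) in dst.axioms
               for ax in src.axioms)

# Two small "theories" with different vocabularies.
t1 = Theory(frozenset({"Auto", "Fahrzeug", "subclass"}),
            frozenset({("subclass", "Auto", "Fahrzeug")}))
t2 = Theory(frozenset({"Car", "Vehicle", "MovableThing", "subclass"}),
            frozenset({("subclass", "Car", "Vehicle"),
                       ("subclass", "Vehicle", "MovableThing")}))

print(is_morphism(t1, t2, {"Auto": "Car", "Fahrzeug": "Vehicle"}))
# True: the axiom of t1 is carried onto an axiom of t2
print(is_morphism(t1, t2, {"Auto": "Vehicle", "Fahrzeug": "Car"}))
# False: the translated axiom is not held by t2

A lattice of theories is then just a collection of theories ordered
by the existence of such morphisms, and Goguen's institutions let the
same picture be stated without committing in advance to one
particular logic.
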
On practical matters, and to suggest a direction and a source for a
beginning top ontology, I propose that category theory provides an
immediately usable language in which to write the beginnings of a
language for first-order manipulation of theories and models (see my
note to Leo Obrst), and that concurrently we carry out and execute
your (Pat's) suggestions to actively "do" and build ontologies ---
without the active building, we will not converge on a solution from
either the formal or the tacit perspective. (05)
I offer for consideration the proposal that theory without empiricism
(the active doing) will lead to a lack of applications, while
empiricism without theory may lead to short-lived value. Hence I
suggest a combined approach, contributed to equally by the
participants. Projects in the various interoperability arenas are
interesting to review (for those involved in them); however,
committing a priori to one choice of theory may lead to a stove-piped
vision of interoperability (as CORBA provided, along with its legacy
of vendor lock-in), and that defeats the large-scale interoperability
needed today, where systems must interoperate in the face of
unforeseen changes arising from the evolving future requirements of
those same systems (i.e., to fight the future). (06)
I am unfortunately too close to, and too biased toward, my own pet
theories to be of much value in suggesting broad strokes beyond the
best efforts I have made here. I therefore hope that someone may find
resonance with these ideas and lead them forward, or, if they find
them dissonant, point me to some resource or mention where my
thinking may be flawed. I would be willing to provide support or lend
a hand in writing out design/code details (e.g., a CL version of
something like Prolog to help with tools, etc.). (07)
Thanks, (08)
Arun (09)
Cassidy, Patrick J. wrote: (010)
>To the COSMO-WG
> I am sending this note to the COSMO-WG to avoid overloading the
>mailboxes of those who have not expressed an interest in the technical
>issues of the upper ontology. As soon as any kind of consensus on any
>issue is reached, the general list can be informed and others invited
>to comment and add to the discussion.
>
>This message is rather long, as I am attempting to address messages
>from John Sowa, Barry Smith, Matthew West, Gary Berg-Cross, Mike
>Gruninger, Hans Teijgeler, and Ken Ewell, in the threads "The world may
>fundamentally be inexplicable" and "framework approaches designed to
>support for interoperable systems"
>At the end of this I will request once again that the effort of the
>COSMO-WG focus immediately on the actual construction of the Common
>Semantic Model, or a Unified Framework, or the "core ontology", in very
>specific and concrete terms, i.e. an actual ontology that we can begin
>to test for its suitability to purpose with programs that can perform
>logical inference. I do not believe that further high-level discussion
>will advance our understanding of the issues beyond what we now know,
>until we have had a chance to see how the members evaluate very
>specific structures - classes and relations - that could be part of
>such a core/COSMO/UF. We can then relate the abstract principles to
>their expression in the specific ontology structures that the computers
>will work with.
>I think that once we start focusing on the specific structures of the
>COSMO that are agreeable to most of us, the task will turn out to be a
>lot easier than these abstract discussions have made it sound.
>
>The following topics are discussed:
>(1) What is the COSMO intended to be?
>(2) How would the COSMO enable interoperability?
>(3) Will a bare taxonomy serve as a COSMO?
>(4) Will the COSMO itself be a Lattice of Theories?
>(5) ISO15926
>(6) How much acceptance is needed for a useful COSMO?
>(7) What ontology language?
>(8) Would the COSMO be an Executable Specification?
>(9) Do we need more preliminary discussion?
>(10) So where do we start?
> =============== 8 pages to go ===========================
>
>(1) What is the COSMO intended to be?
> One of the major motivations for organizing this working group is
>expressed as one of the goals of the ONTACWG in its charter thus:
>
>" Maintain, as a community, a common upper ontology and a set of
>contexts and mid-level ontologies which will provide a mechanism for
>resolution of questions as to which concepts in which classifications
>are: identical to; different from but consistent with; or logically
>incompatible with, those of other classifications."
>
>What I had in mind when I wrote that was that accurate interoperability
>of programs using semantic inference will most efficiently be ensured
>by having a high-level ontology that will have enough fundamental
>concepts - classes, relations, functions, rules (including axioms), and
>some associated instances - to serve as the "conceptual defining
>vocabulary" that will be sufficient to specify the meaning of concepts
>(or terms) in any domain knowledge classification, by combination and
>extension of the basic concepts. What this means is that if two
>ontology designers want their systems to be interoperable, they only
>need to use the same set of fundamental concepts in their logical
>specifications, and when they want to communicate, share any newly
>defined concepts that are not already in the COSMO (or perhaps in some
>extension used in common).
>
>What it means to "use the same set of fundamental concepts" is that:
> 1.1 Every class is linked by the subclass relation to some class in
>the COSMO; and
>  1.2 Every newly-defined relation or function is axiomatized using
>only terms already in the COSMO.
> 1.3 Likewise, new axioms use only terms either in the COSMO or whose
>meaning is specified as in 1.1 and 1.2.
> 1.4 Of course, a term that is itself specified by 1.1 and 1.2 and 1.3
>can be considered as an extension of the COSMO, and used in further
>specifications.
>
>This mechanism is crudely analogous to the use of a fixed number of
>"defining" words by lexicographers in some dictionaries (e.g. about
>2000 words are used to define the 65,000 words of the LDOCE). Those
>defining words suffice to specify the meanings of all other words in
>the dictionary. For an ontology, the specifications will be with
>concepts, relations, and axioms, but will also use a limited number of
>base concepts - perhaps 4000 to 10,000.
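>
>As a very rough illustration of what a mechanical check of criteria
>1.1 - 1.3 above might look like (the vocabulary and definitions below
>are invented placeholders, not taken from any actual COSMO file, and
>the Python rendering is only a sketch):
>
> # Toy check that newly defined terms use only terms already in the
> # COSMO or in previously accepted extensions (criteria 1.1 - 1.3).
> # The vocabulary and definitions are invented for illustration.
>
> cosmo_terms = {"PhysicalObject", "Process", "Person", "subclass",
>                "hasParticipant", "locatedIn"}
>
> # Each new term lists the terms used in its logical specification,
> # in the order in which the definitions are introduced.
> new_definitions = {
>     "Vehicle":      {"subclass", "PhysicalObject"},
>     "DrivingEvent": {"subclass", "Process", "hasParticipant",
>                      "Person", "Vehicle"},     # uses an extension term
>     "Garage":       {"subclass", "PhysicalObject",
>                      "ParkingSlot"},           # ParkingSlot undefined
> }
>
> def check_extension(cosmo, definitions):
>     accepted = set(cosmo)
>     for term, used in definitions.items():
>         missing = used - accepted
>         if missing:
>             print(term, "is NOT conformant; undefined terms:",
>                   sorted(missing))
>         else:
>             print(term, "conforms and is accepted as an extension")
>             accepted.add(term)
>
> check_extension(cosmo_terms, new_definitions)
>
>A real check would of course have to look inside the axioms
>themselves rather than at bare symbol lists, but the bookkeeping is
>of this general kind.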
>
>It can be confidently anticipated that the base set of concepts in the
>COSMO adequate to specify all ontologies linked to it at any one time
>will expand as new domains are linked and new knowledge is discovered.
>But we will know at any given time what set of concepts is sufficient
>to specify the meanings of all of the linked domain ontologies, and
>therefore to enable interoperability of those systems.
>
>More specific extensions to the COSMO can be created, containing
>concepts common to several domains, and can serve as "mid-level
>ontologies" not part of the COSMO itself, but also of utility in
>promoting interoperability of domain ontologies.
>
>The question of knowing when a newly-specified concept is sufficiently
>"primitive" to be included as a new concept in the COSMO itself is an
>interesting and non-obvious one. It has several aspects, and is
>probably worth a detailed discussion in itself. But at this point I do
>not think it necessary to divert attention to this question from the
>more pressing and immediate issues that need to be solved first.
>
>(2) How would the COSMO promote interoperability?
> "Semantic Interoperability" as I understand it is the ability of two
>independent programs with reasoning capability to arrive at the same
>conclusions from the same data.
>
>The COSMO, used as a "conceptual defining vocabulary" as described in
>(1) above, would enable such interoperability in this fashion:
>
> 2.1 Any two systems that desire to interoperate must share the
>definitions ("specifications") of their new local concepts and the data
>(instances) on which their reasoning is to be performed (this is not
>necessarily the whole of either ontology).
> 2.2 Because they are specified using the same base concepts, these
>concepts and data can be automatically merged by a merger program which
>will be able to recognize and merge duplicated concepts, and permit
>each program to interpret (reason with) the concepts in the other.
>This will be possible to do accurately and automatically only because
>they have used the same basic set of defining concepts in the COSMO,
>which are understood by both systems. The merger creates a single
>ontology used by both systems. That is the source of interoperability.
>Such a merger may be reused, or used only once.
>  2.3 It is possible that the merger of the two system ontologies will
>create logically contradictory concepts. This may in some cases be
>recognized when a specific instance of a real-world object (e.g. "The
>Eiffel Tower") is specified as an instance of incompatible categories
>(e.g. FixedStructure and TouringEvent). The merger system will warn
>the system operators on both ends of the conflict - with the result
>depending on policies on both ends. It may happen that unrecognized
>incompatibilities will occur. The frequency of such things happening
>will not be predictable without experience in use of such a mechanism.
>Knowing how that can be avoided will also require direct experience.
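>
>To make 2.2 and 2.3 slightly more concrete, here is a deliberately
>naive sketch of the first step such a merger program might take (the
>class names, instance names, and disjointness declaration are
>invented for this illustration only):
>
> # Naive merger step: pool the assertions of two systems that share
> # the same defining vocabulary, then warn when a single instance has
> # been asserted into two classes declared disjoint.  Names invented.
>
> from itertools import combinations
>
> disjoint_pairs = {frozenset({"FixedStructure", "TouringEvent"})}
>
> system_a = {("instance", "EiffelTower", "FixedStructure"),
>             ("instance", "LouvreMuseum", "FixedStructure")}
> system_b = {("instance", "EiffelTower", "TouringEvent")}
>
> def merge_and_check(*ontologies):
>     merged = set().union(*ontologies)   # duplicates collapse here
>     classes_of = {}
>     for kind, inst, cls in merged:
>         if kind == "instance":
>             classes_of.setdefault(inst, set()).add(cls)
>     for inst, classes in classes_of.items():
>         for c1, c2 in combinations(sorted(classes), 2):
>             if frozenset({c1, c2}) in disjoint_pairs:
>                 print("WARNING:", inst, "is an instance of the",
>                       "disjoint classes", c1, "and", c2)
>     return merged
>
> merged = merge_and_check(system_a, system_b)
>
>How such a warning is resolved would then depend, as noted above, on
>the policies at both ends.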
>
>(3) Will a bare taxonomy serve as a COSMO?
> The conceptual defining vocabulary must be at least as expressive as
>the systems whose concepts will be specified by that set of concepts.
>If any significant inference is to be supported, a bare taxonomy cannot
>serve. If the systems to interoperate use first-order axioms, the
>COSMO must include first-order axioms. More specifically, every
>relation and function must have some axioms (at least by inheritance
>from parent relations) that specify the meanings - logical inferences -
>that derive from the relation holding between two instances (or
>classes).
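>
>A trivial example of the sort of inference that a bare taxonomy
>cannot license but a single axiom on a relation can (the relation,
>the axiom, and the instances here are invented for illustration):
>
> # One axiom -- "locatedIn is transitive" -- and the extra inference
> # it licenses.  Names are invented for illustration only.
>
> facts = {("locatedIn", "EiffelTower", "Paris"),
>          ("locatedIn", "Paris", "France")}
>
> def transitive_closure(facts, relation):
>     """Forward-chain the transitivity axiom for one relation."""
>     derived = set(facts)
>     changed = True
>     while changed:
>         changed = False
>         pairs = [(x, y) for (r, x, y) in derived if r == relation]
>         for x, y in pairs:
>             for y2, z in pairs:
>                 if y == y2 and (relation, x, z) not in derived:
>                     derived.add((relation, x, z))
>                     changed = True
>     return derived
>
> print(transitive_closure(facts, "locatedIn") - facts)
> # {('locatedIn', 'EiffelTower', 'France')} -- a conclusion no bare
> # taxonomy of the same terms could deliver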
>
> If the community of users of the COSMO cannot agree on the meaning
>of a class or relation, as expressed both in the documentation and in
>the axioms logically specifying the common understanding of the
>meaning, then the concept will be fatally ambiguous, and doomed to be
>used in different senses by different users, and will not serve to
>promote the kind of interoperability described in (1) and (2) above.
>Classes or relations which cannot be at least minimally described by
>relations or axioms that are acceptable to the great majority of users
>should not be in the COSMO. This would, I think, in practice conform
>with John Sowa's view of the "core" that "those axioms should not make
>any commitments that would conflict with any reasonable scientific or
>engineering principles or techniques.". We may differ in our gut
>instincts as to how many axioms will prove to be thus broadly
>acceptable, but I again strongly urge that the acceptability of axioms
>be determined by proposing and discussing them specifically.
>
>(4) Will the COSMO itself be a Lattice of Theories?
> The difficulty of discussing a "lattice of Theories" in the
>abstract, rather than just building an ontology and deciding where
>there may be logical incompatibilities, lies in this: without
>describing a specific ontology structure, it is not clear whether a
>context, belief system, hypothesis, hypothetical world or other
>situation that may breed apparent differences is in fact logically
>incompatible, or is better treated as a context that is treated
>computationally by a different mechanism from the way a logical
>incompatibility would be treated. And in fact there may be very little
>difference between the way logical incompatibility and different
>contexts (e.g. different time intervals) are treated. Although John
>Sowa's lattice of theories makes perfect sense from a logical point of
>view, we do not in fact know whether what we need in the COSMO will be
>such a lattice or will have a different logical structure.
>
>
>This is a point on which it is critically important that we discuss
>very specific cases rather than generalities. I am quite agnostic on
>whether and to what extent true logically incompatible assertions will
>be desirable in the COSMO. But in ten years of asking, in various
>ontology fora, for examples of truly logically incompatible ontology
>structures desired by different individuals, the only case I have seen
>thus far is the question of whether a time point is or is not identical
>to a zero-length time interval. This is very simply accommodated in
>the COSMO by having different classes for the time interval and time
>point, and not specifying whether these two concepts are equal or not.
>There will be instead a translation axiom:
> (=>
> (instance ?TP TimePoint)
> (exists (?TI ?TILOC)
> (and
> (instance ?TI TimeInterval)
> (instance ?TILOC TimePoint)
> (hasBeginning ?TI ?TILOC)
> (hasEnding ?TI ?TILOC)
> (equal ?TILOC ?TP))))
>
> . . . which will allow those who want to treat a zero-length time
>interval as a time point to do so for practical purposes, without
>actually asserting it logically.
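>
>Procedurally, the effect of the axiom is simply to make a zero-length
>interval available alongside each time point, without ever asserting
>that the two are equal; a reader who prefers code to axioms can
>picture something like the following rough Python paraphrase (mine,
>for illustration only, not part of any ontology file):
>
> # Rough procedural paraphrase of the translation axiom: for each
> # TimePoint there is a TimeInterval whose beginning and ending are
> # that same point.  Equality of the two is never asserted.
>
> from dataclasses import dataclass
>
> @dataclass(frozen=True)
> class TimePoint:
>     label: str
>
> @dataclass(frozen=True)
> class TimeInterval:
>     beginning: TimePoint
>     ending: TimePoint
>
> def interval_for(tp: TimePoint) -> TimeInterval:
>     """The zero-length interval corresponding to a time point."""
>     return TimeInterval(beginning=tp, ending=tp)
>
> noon = TimePoint("2006-01-31T12:00")
> print(interval_for(noon))  # a distinct object, never equated with noon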
>
>There are other cases that have been described as "logically
>incompatible" such as the 3-D and 4-D representation of objects, but as
>far as I can tell from the logical descriptions I have seen, these are
>simply different though related concepts that can reside happily in the
>same ontology without logical contradiction, with translation
>mechanisms between them (as with the time point/interval), though
>probably the translations will be more complex. A problem with
>deciding precisely how to handle the translation is that we need
>logical specifications of 4-D in the same FOL language that is used for
>3-D, and I have not yet seen the full specification. Matthew
>West and Hans Teijgeler have provided some additional detail beyond
>what is on the Web regarding how the ISO15926 ontology is to be
>interpreted. I have not yet fully grasped the relations between that
>representation and the typical 3-D representation, and will want to
>learn more about ISO15926 (more about that below).
>
>Now it is of course possible for one ornery ontologist to assert that a
>zero-length time interval is equal to the time point (equal ?TP ?TI),
>and another ornery ontologist to assert the opposite (not (equal ?TP
>?TI)). These two ontologies would then have a logical incompatibility.
>We cannot prevent potential users from doing such contentious and
>pointless (pun?) things. But there is no good reason to do so, and
>even this potential "logical incompatibility" should present no problem
>at all for those who actually want to use a COSMO to interoperate.
>
>Anyone who thinks that there are truly logically incompatible concepts
>that some might want to have included in the COSMO should tell us which
>concepts those are, so we can determine if they are truly logically
>contradictory. Being very specific is critical to be able to discuss
>this issue in any constructive way. Those cases can be discussed
>individually.
>
>It may well turn out that different domain ontologies develop logically
>incompatible representations, and that there will be genuinely
>incompatible assertions in different domain ontologies. So there does
>need to be some mechanism - perhaps a "lattice of theories" - to handle
>such cases. There will in any case be a need to handle
>contexts, and the computational mechanisms for handling contexts and
>logical incompatibilities may not be very different.
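>
>One very crude way to picture such a mechanism (this is my own toy
>sketch, and emphatically not a description of how Cyc actually
>implements microtheories): assertions are indexed by a named context,
>contexts may inherit from more general ones, and a query only sees
>the contexts visible from where it is asked, so assertions that would
>clash if pooled never actually meet:
>
> # Toy context mechanism: assertions live in named contexts; a context
> # sees its own assertions plus those of its more general ancestors.
> # All names and figures below are invented for illustration.
>
> contexts = {                 # context -> parent (None for the top)
>     "Base": None,
>     "Year1990": "Base",
>     "Year2006": "Base",
> }
> assertions = {
>     "Base":     {("instance", "EiffelTower", "FixedStructure")},
>     "Year1990": {("population", "Paris", 2150000)},
>     "Year2006": {("population", "Paris", 2180000)},
> }
>
> def visible(context):
>     """All assertions visible from a context (own plus ancestors')."""
>     seen = set()
>     while context is not None:
>         seen |= assertions.get(context, set())
>         context = contexts[context]
>     return seen
>
> # The two population figures never meet in any single context, so no
> # contradiction is ever asserted, yet both remain available for use.
> print(("population", "Paris", 2150000) in visible("Year1990"))  # True
> print(("population", "Paris", 2150000) in visible("Year2006"))  # False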
>
>In this respect, the case of the Cyc ontology and its experience with
>microtheories would probably be relevant. Unfortunately, we do not
>have a lot of detail, but in a side discussion with John Cabral at
>Cycorp, he described the use of microtheories in a way that sounded as
>though they are used as a mechanism for very general specifications of
>contexts. As for logical inconsistency in Cyc, he said:
>
>"Everyone here's attitude is that inconsistency is bad and it is
>avoided. Some people might take the microtheory mechanism to be a
>safety net to allow for sloppy work, but that's no-one's attitude
>here."
>
>So, once again, if anyone believes that there are logically
>inconsistent theories that should be in the COSMO, please describe them
>in sufficient detail to allow us to determine their logical relations
>and decide how they might be handled.
>
>
>(5) ISO15926
>
> For me one of the highlights of the thread "The world may
>fundamentally be inexplicable" was the exchange between Barry Smith and
>Matthew West discussing specifics of ISO15926, from which I learned new
>things. Thanks!
>
>One difficulty with ISO 15926 is that we in ONTACWG are constrained to
>work with only those publicly available knowledge classifications that
>can be examined and used freely. Only part of ISO15926 is available on
>the open Web. But Matthew West and Hans Teijgeler have been providing
>additional information which will help us determine how that ontology
>can be related to the others we are interested in.
>
>I think that there is considerable value in finding a relation between
>the 4-D representation of objects and the 3-D representation, and I
>hope these discussions can continue until we arrive at an agreement on
>how to specify those relations. No one need be required to use one or
>the other, but if we can find an accurate translation of one view to
>the other it will be possible to take advantage of each wherever it is
>most useful, and preserve interoperability with the alternative view.
>
>These discussions should probably form a separate thread of their own. But
>we should have anchor points for the base classes of ISO15926 in the
>first version of the COSMO. I have added only a few ISO15926 concepts
>to the TopLevel merged ontology:
> http://colab.cim3.net/cgi-bin/wiki.pl?CosmoWG/TopLevel
> . . . and will want to add more as soon as we can determine where they
>fit in.
>
>(6) How much acceptance is needed for a useful COSMO?
> To be useful in promoting interoperability, the COSMO will not need
>universal acceptance, but does need to be accepted by a community large
>enough to encourage potential users to explore its utility for their
>purposes, and to encourage academic and commercial groups to develop
>utilities to make it easier to use. I think the ONTACWG itself may be
>large enough for that purpose, but it will certainly be helpful to
>become accepted by a wider user base.
>
>(7) What ontology language?
> Since it will be important for every relation in the COSMO to have
>its logical consequences specified by some axiom(s), the minimum
>expressiveness of the base language used in the COSMO needs to be first
>order, and probably quasi-second order (i.e. quantification over
>predicates and use of function terms). This is the level of
>expressiveness used in OpenCyc, SUMO, and DOLCE. KIF, some other
>implementation of SCL, or perhaps the developing IKL may be adopted as
>the canonical representation. A tool like Protege can be useful, but
>the Protege environment itself will not provide (at present) the
>reasoning mechanisms that can implement first-order reasoning.
> However, this does not mean that systems that use the COSMO must
>themselves use the axioms in the COSMO, in order to take advantage of
>it as a means to enable interoperability. As mentioned above, the
>COSMO has to be at least as expressive as the Knowledge Classification
>Systems that use it as their defining vocabulary. But the dependent
>user systems can have less expressive reasoning.
>
> We can envision several levels of semantic expressiveness for
>versions of the COSMO, that are related to each other by an
>"acceptance" relation. A version in a less expressive language (e.g.
>OWL) can conform to the FOL representation in the sense that every
>relation, axiom (implied or explicit), and inference available in the
>OWL version is also available in the FOL version - there are no logical
>contradictions. But the FOL version will have axioms not available in
>the OWL version, and additional inferences beyond those discoverable in
>the OWL version may be found within the FOL version. The linguistic
>explanations of the meanings of the concepts should be the same in all
>versions. But some part of the meaning will be formalized in the FOL
>version that is not formalized in the OWL version. Those who use the
>OWL version need not use the more detailed axioms used in the FOL
>version, but they will nevertheless be interoperable, with this
>understanding: those who use the OWL version will know that there are
>levels of meaning (described in the comments to each concept) that are
>not expressed in the OWL formalism, but the users still "accept" the
>more complete definition and the more expressive meaning as being what
>they actually **intend** for the concept to mean. Thus users of less
>expressive logical formalisms can agree on complex nuances of meaning
>even if their formalism does not make such distinctions. They can do
>this by identifying their concepts with the more completely specified
>concepts in the FOL COSMO. What happens then, when an OWL system has
>to interoperate with a FOL system, is that either they mutually decide
>to use only DL reasoning, or mutually decide to use FOL reasoning, and
>inferences will be richer or less complex depending on the reasoning
>agreed on for interoperability purposes. But they will be the same for
>both systems, if they use the same level of expressiveness.
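>
>The "acceptance" relation between versions can be pictured very
>simply (again a toy rendering with invented axioms, not a proposal
>for the actual files): the axiom set of the less expressive version
>is a subset of the axiom set of the more expressive one, which in
>addition carries content the smaller version cannot express:
>
> # Toy picture of the "acceptance" relation between an OWL-level and
> # an FOL-level version of the same ontology: same vocabulary and
> # comments, with the OWL axioms a subset of the FOL axioms.
>
> owl_axioms = {
>     ("subclass", "Person", "Agent"),
>     ("domain", "authorOf", "Person"),
> }
> fol_axioms = owl_axioms | {
>     # Content beyond what the DL version expresses, written as a label.
>     ("rule", "(uncleOf ?x ?y) implies exists ?z such that "
>              "(siblingOf ?x ?z) and (parentOf ?z ?y)"),
> }
>
> def accepts(less_expressive, more_expressive):
>     """The weaker version 'accepts' the stronger one: every axiom it
>     states also appears, unchanged, in the stronger version."""
>     return less_expressive <= more_expressive
>
> print(accepts(owl_axioms, fol_axioms))   # True
> print(fol_axioms - owl_axioms)           # the FOL-only content
>
>A real conformance check would of course also have to verify that the
>weaker version draws no inference that the stronger one contradicts.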
>
>At a lower expressiveness level, a taxonomy or thesaurus can also
>identify its terms with the carefully specified concepts in the
>COSMO. If the time comes that their knowledge representations need to
>be used for logical inference, the more detailed meanings "accepted" by
>the less expressive taxonomies can be used. But perhaps a more common
>use of identifying taxonomy or thesaurus terms with COSMO terms will be
>to permit sharing of taxonomic document classifications for search
>purposes.
>
>(8) Could the COSMO be an Executable Specification?
> The COSMO itself would be a conceptual defining vocabulary, and
>would not be an executable specification. But if the interoperating
>ontologies used modal logic or executable methods, it might be
>necessary to be able to represent such structures in the COSMO. The
>first version will not have capability beyond FOL. Whether adequately
>broad agreement could be achieved on a more expressive COSMO will have
>to be determined after the first version is tested.
> Rather than try to increase the expressiveness of the COSMO beyond
>FOL, it may be necessary (or just better) to leave it as a basic
>"introductory ontology" that can be easily mastered and for which there
>will be many sample applications to help people learn how to use it
>quickly and effectively. For more complex reasoning tasks, a
>COSMO-conformant ontology might be importable into a more expressive
>ontology, in the way that an OWL ontology could be imported into the
>COSMO. In what may be an ideal situation, the COSMO might be a
>"COmpatible Subset of Multiple Ontologies", logically compatible with
>several upper ontologies of greater expressiveness. One possibility
>that is being discussed is whether it might be possible to find a
>compatible subset of OpenCyc, SUMO, and DOLCE. If that is possible,
>such a compatible subset could serve as the COSMO to enable
>interoperability.
>
>8.1 Ken Ewell asked:
>
>
>> In order for any two or more systems to interoperate successfully they
>> must implicitly or explicitly agree on the ground rules of such an
>> interaction (among them: methods of command-response or collaboration,
>> interoperation or cooperation of processes, composition and division
>> of processes, interactive control, certain types, etc.).
>>
>> Irrespective of the objects interacting -- how can anyone ever be
>> certain of an outcome that depends on the unfolding interaction while
>> taking the ground rules (Grundgesetze) of the interaction for granted,
>> or worse, leaving them for others to fix or modify at will?
>
>[PC] The logic of interactions of entities in any program that will
>interoperate conceptually with another program must be specified in the
>ontologies of those programs, which must be either identical or related
>to each other by some common defining ontology. If programmers in
>different locations change the intended meanings of the concepts by
>procedural code, the interoperation will indeed be imperfect. All of
>the rules - groundrules, first-floor rules, etc., for program object
>interaction must be specified in the ontology. The programmers in such
>a model-driven system can freely change the inputs and output
>interfaces by procedural code without mucking up the ontology or its
>logic. But data interaction and processing rules must be in the
>ontology. This may require that the ontology have a mechanism for
>including procedural specifications, and the ontology, with perhaps an
>associated sequence specification (main function) may be viewed as an
>executable specification. The COSMO may never itself be an executable
>specification, but more expressive ontologies that are compatible with
>the COSMO may serve this function. I think this is an issue that we
>will not be able to examine in any analytical manner until we have
>agreement on the basic structure of the COSMO.
>
>But "Semantic Interoperability" merely means that the concept meanings
>used are the same. There are other aspects of interoperability, some
>of which Ken mentioned, most specifically timing and control
>interactions between programs, that are not actually semantic as far as
>data meaning is concerned and are not the responsibility of the COSMO.
>The outcome of interactions between intelligent agents may well depend
>on factors other than the ontologies (knowledge) possessed by those
>agents.
>
>
>(9) Do we need more preliminary discussion?
>
>Gary Berg-Cross said:
>
>
>
>> The question is, in the 4-part approach to a framework, item 4 jumps
>> out at me as something that needs more discussion. What are the
>> methodologies for organizing the hubs and modules, relating them to
>> one another?
>
>[PC] The relations among elements of a lattice or between the core and
>hubs will be much easier to explore after we have built at least a
>minimal core ontology. We aren't starting from scratch, we have more
>than ten years of ontology development experience to draw from. We
>should be able to make rapid progress on some basic structures - if we
>work on them - and then the more complex issues may become more
>immediate.
>
>
>... and further:
>
>
>> [G. B-C] What can we do in this regard? Perhaps it is to use the
>> idea of prototyping some of these critical issues to mitigate risk.
>> In a prototype we aren't going down a long road but have some
>> specific issues we want to investigate with a testable product
>> resulting.
>
>[PC] I think we should begin inferencing tests (the beginnings of
>prototyping) on the earliest stages of the COSMO as it is developing.
>I agree we need concrete data to guide development.
>
>
>(10) So where do we start?
>
>
>
>> [Barry Smith]
>> Can we start to work out what the terms of the taxonomy should
>> be? Just a few, perhaps, to get us going. And what the
>> corresponding axioms will be?
>
>> [John Sowa] That would be a very useful exercise.
>
>Hey, guys! Over here! What about the merged ontology I put up on our
>web site? :-(
> http://colab.cim3.net/cgi-bin/wiki.pl?CosmoWG/TopLevel
>
>
>If you think the TopLevel is so utterly useless as to not be worth
>discussing, fine, but please say so and say why and propose an
>alternative. Should we start even smaller? The OWL version has a few
>instances in it that can serve to illustrate some elementary DL
>reasoning methods. I think we need to demonstrate inference at the
>earliest stage.
>
>Barry did make a few comments on specific Cyc classes, but we can
>discuss those in more detail if those are the only ones that are
>problematic.
>
>I believe there is a lot of benefit in starting from already
>well-developed ontologies.
>
>So, OK, let's get on with it. Please make specific proposals (classes,
>relations), comment on those proposals, and let's find out who actually
>disagrees with what.
>
>Pat
>
>Patrick Cassidy
>MITRE Corporation
>260 Industrial Way
>Eatontown, NJ 07724
>Mail Stop: MNJE
>Phone: 732-578-6340
>Cell: 908-565-4053
>Fax: 732-578-6012
>Email: pcassidy@xxxxxxxxx
>
>
>
>
> (011)