I have sent Denise a request for clarification of her first two comments. My responses to the last three are below:
>> (3) clearly state what the end game of an ontology is (there are of course multiple end games, but we need to begin to at least define what some of them are - otherwise they are motherhood, apple pie and everything in between);
There are several goals for the ONTACWG, but the more technical issues depend on adopting a Common Semantic Model (COSMO), which would be an ontology -- it could be an existing one or one constructed from existing ones by the ONTACWG. The purpose of the COSMO would be to provide a common basis for specifying the meanings of domain-specific terms and relations, so that applications will be able to interpret those terms consistently for logical inferencing purposes. This is not exactly "motherhood", since rather less than a majority of people understand what a precise logical definition of a semantic relation would look like. Funny you should mention "motherhood": my slide 7 uses it as an example of a simple logical specification of a semantic relation.
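For anyone who has not seen the slides, the kind of logical specification I mean can be sketched roughly as follows (this is only an illustrative reconstruction, not the exact axiom on slide 7):

  \forall x \, \forall y \; \bigl( \mathit{motherOf}(x,y) \leftrightarrow \mathit{female}(x) \wedge \mathit{parentOf}(x,y) \bigr)

That is, x stands in the motherOf relation to y exactly when x is female and x is a parent of y; "motherOf" is then no longer a bare label but is constrained by its logical relation to the other symbols.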
>> (4) distinguish between tools and techniques that can be used to build an ontology, and begin to identify where these tools and techniques are best used in the development of a robust ontology model;
There are several ontology-building tools available. The sub-WG most concerned with actually building an ontology will be the COSMO WG, and we will surely discuss what tools we think will be useful. Did you have in mind specific tools that you would recommend?
>> and (5) review what work has already been done that is not labelled 'ontology' per se but actually does move towards the end game.
One main purpose of the "Coordinating" working group is precisely to try to attain a global view of all work done on knowledge classification systems, which include, as mentioned in the charter, ontologies, taxonomies, thesauri, and graphical representations such as UML, CMAP, Topic Maps, and MOF specifications. This also includes not only work that is completed, but to the extent possible a listing of work that is in progress, so that unintended duplication of effort on the same topic can be avoided. We want to maintain one or more resource pages on the ONTACWG Wiki with pointers and perhaps also descriptions of that work. We are requesting all members of ONTACWG to post references to any work they are aware of -- finished or ongoing -- to our "PointerPage":
http://colab.cim3.net/cgi-bin/wiki.pl?OntologyTaxonomyCoordinatingWG/PointerPage
Or, you can send it to me and I will collate the references and try to eliminate duplications. Such references will, I think, help address the point (5) that you raise. If anyone is willing to actually do a comparative review of the available resources, or refer us to such a review already done, that would be a great contribution.
Pat
Patrick Cassidy
MITRE Corporation
260 Industrial Way
Eatontown, NJ 07724
Mail Stop: MNJE
Phone: 732-578-6340
Cell: 908-565-4053
Fax: 732-578-6012
Email: pcassidy@xxxxxxxxx
-----Original Message-----
From: ontac-forum-bounces@xxxxxxxxxxxxxx
[mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of
dbedford@xxxxxxxxxxxxx
Sent: Friday, October 07, 2005 9:23 AM
To: ONTAC-WG General Discussion
Cc: ONTAC-WG General Discussion; ontac-forum-bounces@xxxxxxxxxxxxxx
Subject: RE: [ontac-forum] Follow up question on Ontology, knowledge, language confusion
All,
I was not able to participate in your session on Wednesday because I was giving an all-day workshop on search systems. I must say, though, that I'm a bit concerned that the group seems to be starting a discussion at what I would call level 0 rather than sharing practical experience from organizations which might be at level 2 or 3. I would like to suggest that this group:
(1) challenge/test the basic assumption that the current ontology model is sufficiently well developed and tested to use as a baseline for any real practical development -- I think there are several flaws in the model;
(2) test that ontology model for scalability and extensibility (I don't think it will pass this test);
(3) clearly state what the end game of an ontology is (there are of course multiple end games, but we need to begin to at least define what some of them are - otherwise they are motherhood, apple pie and everything in between);
(4) distinguish between tools and techniques that can be used to build an ontology, and begin to identify where these tools and techniques are best used in the development of a robust ontology model;
and (5) review what work has already been done that is not labelled 'ontology' per se but actually does move towards the end game.
For several years now we have been talking at the word level -- way too much progress has been made beyond this level for us to still be "struggling" with these issues. We cannot afford to still be talking at this level. And, frankly, the structures for dealing with these issues are already available to us and in some cases are in place.
Please forgive my frankness, but there's a lot of practical experience around in solving these kinds of problems -- we need to leverage that experience, not try to reinvent it.
Best regards,
Denise
"Cassidy, (019)
Patrick J." (020)
<pcassidy@mitre
To
.org> "ONTAC-WG General Discussion" (021)
Sent by: <ontac-forum@xxxxxxxxxxxxxx> (022)
ontac-forum-bou
cc
nces@xxxxxxxxxx (023)
.net
Subject
RE: [ontac-forum] Follow up
question on
Ontology, knowledge, language (024)
10/07/2005 confusion (025)
08:15 AM (026)
Please respond (027)
to (028)
ONTAC-WG (029)
General (030)
Discussion (031)
<ontac-forum@co (032)
lab.cim3.net> (033)
Gary,
Concerning your question:
>>> The thrust of my issue is that while a completed ontology might avoid the issues of language and knowledge, the process of developing an ontology runs into both of these. Our expertise has knowledge problems and our discussion of different concepts to merge uses language to communicate our ideas on this, and so the resulting ontology product may reflect these problems.... Of course we may have methods to resolve these, but we shouldn't think that is an automatic process. Ontologies aren't built by tools, but by us using tools.
Yes, in developing ontologies, whether alone or collaboratively, we do rely on our language to describe the meanings that we want to associate with concepts. But when we are trying to create unambiguous definitions, we tend to use only that portion of language that is itself fairly unambiguous, and in which the words are understood as labels for the corresponding concepts that are widely shared and agreed on -- i.e., we use a fairly precise and common defining vocabulary when we want to be precise. The example I provided was the Longman Dictionary of Contemporary English, in which the lexicographers consciously restricted themselves to a fairly small set of about 2000 basic words (pointers to concepts) to define all of the 56,000 "words and phrases" in that dictionary. These basic concepts would have to substitute for the knowledge people must acquire over years of living in the physical world, bumping into things, getting hungry and eating, interacting with other people, and learning how to create abstractions and to use the structures of common physical things and events ("head", "drop") to label more abstract non-observable derivative concepts . . . they would have to substitute, *if* they were not also defined and constrained by axioms that restrict the ways they can relate to each other.

As Adam pointed out, when we add axioms to the concepts in an ontology, the ambiguity is reduced, and the concepts are no longer merely arbitrary symbols, since they are constrained to interact with the other symbols in well-defined ways. And as Michael Gruninger pointed out, the ideal toward which we should try to develop our ontologies is the case where the axiomatization is sufficiently rich that the models the definitions are consistent with are only and precisely those models that we intend to describe. In this way those seemingly abstract mathematical structures in the computer will have a structure and behavior closely matching the structure and behavior of the real-world objects that they are intended to represent, and the computer should be able to do reliable inferencing about the real world by manipulating those symbols. We do start with language, but by using the most fundamental, precise, and broadly agreed-on defining words, we can create a comparably precise conceptual defining vocabulary for the computer, and build up more complex concepts in the computer that do not have the ambiguity of natural-language words, which may be used in different contexts to label multiple complex aggregates of concepts.
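To make that point concrete with a small sketch of my own (not taken from Adam's or Michael's material), consider a relation symbol such as properPartOf: left unaxiomatized, it admits interpretations in which it behaves nothing like parthood, but even two axioms already exclude every model in which the relation is symmetric or fails to chain:

  \forall x \, \forall y \; \bigl( \mathit{properPartOf}(x,y) \rightarrow \neg\, \mathit{properPartOf}(y,x) \bigr)
  \forall x \, \forall y \, \forall z \; \bigl( \mathit{properPartOf}(x,y) \wedge \mathit{properPartOf}(y,z) \rightarrow \mathit{properPartOf}(x,z) \bigr)

Each additional axiom prunes more of the unintended models, which is the direction Michael describes.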
I would in fact recommend for the ONTACWG that we adopt an English-language "defining vocabulary" similar to that used in LDOCE, but for which each word labels one concept in the existing ontology -- and use only that defining vocabulary to create the English-language definitions of the concepts we include in the ontology. Then, when we find that the existing English-language defining vocabulary does not have the words we need to describe some new concept that we want to add, we will have a hint that there is some fundamental concept missing which should be added to the conceptual defining vocabulary.
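As a rough illustration of how mechanical such a check could be (the file name, sample definition, and function names below are hypothetical; this is only a sketch of the idea):

import re

def load_defining_vocabulary(path):
    # One defining word per line, e.g. an LDOCE-style list of ~2000 basic words.
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def words_outside_vocabulary(definition, vocabulary):
    # Return the words of a proposed English definition that are not in the
    # defining vocabulary; each one is a hint of a possibly missing concept.
    words = re.findall(r"[a-z]+", definition.lower())
    return sorted(set(words) - vocabulary)

vocab = load_defining_vocabulary("defining_vocabulary.txt")  # hypothetical file
print(words_outside_vocabulary("an adult female person who has a child", vocab))

Anything such a script reports would be a candidate either for rewording the definition or for adding a new fundamental concept to the defining vocabulary.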
There is another related issue that arises in considering the epistemology of computer knowledge; people wonder how a computer can get a firm "grounding" in reality. In the simplest case, a computer may be restricted to doing in-memory processing of data structures, and the meanings of the data structures would rely totally on what the programmer intends them to mean; the computer would have no independent way to check. But the computer is not totally without connections to the real world. It has disks, keyboards, and other input/output devices, with which it could "experiment" and get feedback to verify that there really is some "real world" out there. And when we reach the stage where the knowledge representation is sufficiently reliable for basic conceptual computational issues, we could fit the computer with more elaborate interactive devices to get a more "human" feeling for the nature of physical reality. To some extent, robotic systems have to do that right now. But the issues we are dealing with in this working group don't require that level of "direct physical knowledge" in the computer. Doing the research to create more elaborate representations will be, I expect, a lot more efficient after some Common Semantic Model ("COSMO") has been widely adopted, and multiple groups can efficiently share and reuse the results of their research because it references a common paradigm for representing knowledge. At that point the efficiency of research may reach the point where the epistemological issues can be investigated in a meaningful way. I think the COSMO has to come first.
Pat
Patrick Cassidy
MITRE Corporation
260 Industrial Way
Eatontown, NJ 07724
Mail Stop: MNJE
Phone: 732-578-6340
Cell: 908-565-4053
Fax: 732-578-6012
Email: pcassidy@xxxxxxxxx
From: ontac-forum-bounces@xxxxxxxxxxxxxx
[mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of Gary
Berg-Cross
Sent: Thursday, October 06, 2005 11:38 AM
To: ontac-forum@xxxxxxxxxxxxxx
Subject: [ontac-forum] Follow up question on Ontology, knowledge, language confusion
Adam Pease pointed out that we often confuse the ambiguity in language and the locality of knowledge with ontology. I wanted to ask about this distinction, but there wasn't time for it; it might be worth discussing here so our concepts and methods are clear.
The thrust of my issue is that while a completed ontology might avoid the issues of language and knowledge, the process of developing an ontology runs into both of these. Our expertise has knowledge problems and our discussion of different concepts to merge uses language to communicate our ideas on this, and so the resulting ontology product may reflect these problems.... Of course we may have methods to resolve these, but we shouldn't think that is an automatic process. Ontologies aren't built by tools, but by us using tools.
Thoughts?
Gary Berg-Cross
EM&I
Potomac, MD
_________________________________________________________________
Message Archives: http://colab.cim3.net/forum/ontac-forum/
To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
Subscribe/Unsubscribe/Config:
http://colab.cim3.net/mailman/listinfo/ontac-forum/
Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
Community Wiki:
http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/OntologyTaxonomyCoordinatingWG