
RE: [ontac-forum] Follow up question on Ontology, knowledge, language confusion

To: "ONTAC-WG General Discussion" <ontac-forum@xxxxxxxxxxxxxx>
From: "Obrst, Leo J." <lobrst@xxxxxxxxx>
Date: Fri, 7 Oct 2005 17:13:32 -0400
Message-id: <9F771CF826DE9A42B548A08D90EDEA807E97A7@xxxxxxxxxxxxxxxxx>
I'll weigh in to support Pat on a couple of these points. I apologize in advance if I get too technical.
 
The whole point of using logic for ontologies and for expressing natural language semantics is to use a formal language in which the meanings of ambiguous natural language statements can be stated unambiguously, i.e., teasing out the distinct elements of ambiguity, representing them, and showing the dimensions along which the ambiguity runs. If I say "the tank next to the bank", there are at least four possible interpretations/meanings: 1) the military vehicle next to the river bank, 2) the military vehicle next to the financial bank building, 3) the liquid container next to the river bank, 4) the liquid container next to the financial bank building. With additional natural language, as in 1-4, we can tease apart the ambiguities, but with logic we can represent those distinctions in a formal language that machines can use.
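As a rough sketch (the predicate names are just illustrative labels of my own, not drawn from any particular ontology), the four readings could be written in first-order logic as:

\begin{align*}
&\exists x\,\exists y\,[\mathit{MilitaryVehicle}(x) \wedge \mathit{RiverBank}(y) \wedge \mathit{NextTo}(x,y)]\\
&\exists x\,\exists y\,[\mathit{MilitaryVehicle}(x) \wedge \mathit{BankBuilding}(y) \wedge \mathit{NextTo}(x,y)]\\
&\exists x\,\exists y\,[\mathit{LiquidContainer}(x) \wedge \mathit{RiverBank}(y) \wedge \mathit{NextTo}(x,y)]\\
&\exists x\,\exists y\,[\mathit{LiquidContainer}(x) \wedge \mathit{BankBuilding}(y) \wedge \mathit{NextTo}(x,y)]
\end{align*}

Each formula is unambiguous on its own, so a machine can work with exactly one reading rather than with the ambiguous English phrase.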
 
We use human language terms to label ontology concepts because those terms are typically readily available to us human beings in that language (English, Chinese, etc.), and we tend, largely by intuition, to agree on their meaning. Terms and concepts are quite distinct items: terms are labels that index the concepts, which in turn express and model the meaning of those labels.
 
So the use of these labeling terms is really to aid humans who look over the ontology concepts (represented formally in an ontology) and say: yes, this label "Person" for the concept Person, with these formally represented relations, properties, superclasses, subclasses, and axioms, is really what I mean by the English word "person", or is at least an approximation of what I mean. That is, a person is necessarily all those things but may in addition be other things; we try initially to capture the "necessary" conditions and over time capture other "sufficient" conditions. Humans necessarily are mammals and have parents, but only sometimes like to chew gum, and sometimes do not have addresses. If you don't have an address, you are still human.
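A minimal sketch of that distinction in code (the class and attribute names are my own, purely illustrative, and not from any existing ontology):

# Illustrative sketch only: the names are invented for this example.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Person:
    # Necessary conditions: every instance must supply these.
    name: str
    parents: List[str]            # every person necessarily has parents
    is_mammal: bool = True        # necessarily a mammal

    # Contingent properties: a person may or may not have these.
    address: Optional[str] = None
    likes_chewing_gum: Optional[bool] = None

# A person with no address is still a person.
alice = Person(name="Alice", parents=["Carol", "Dan"])

Of course a programming-language class is far weaker than a logical axiomatization; the point is only the split between what must hold and what merely may hold.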
 
Most semanticists in natural language use what's called "model-theoretic semantics" to express the set of formal models which are licensed by the logical statements/expressions: i.e., you go from the axioms to the formal models in ontological engineering just as you go from the natural language sentences (of English, Chinese, etc.), as expressed in logical statements, to their formal models (typically represented mathematically in set theory or in structures built from set theory).
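A compressed illustration of that mapping (the sentence and the model are my own toy example): the English sentence "every person has a parent" becomes an axiom, and a formal model is just a domain plus an interpretation of the predicates that makes the axiom true.

\begin{align*}
\text{Axiom:}\quad & \forall x\,[\mathit{Person}(x) \rightarrow \exists y\,\mathit{parentOf}(y,x)]\\
\text{Model:}\quad & D = \{\mathrm{john}, \mathrm{mary}\},\quad \mathit{Person}^{I} = \{\mathrm{john}, \mathrm{mary}\},\\
& \mathit{parentOf}^{I} = \{(\mathrm{mary},\mathrm{john}),\ (\mathrm{john},\mathrm{mary})\}
\end{align*}

Notice that this model satisfies the axiom even though john and mary are each other's parents, which already hints at the "unintended models" discussed below.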
 
Why? Because this syntax-to-semantics (axioms-to-models) mapping enables you to characterize what you "really mean" and compare that to what you "intend to mean." Example: you might have axioms about parent and child classes in an ontology, i.e., parent is a role of a person (one can be both a parent and a child, an employee, a carpenter, an author, a stamp collector, etc.), but have forgotten to include an axiom which states that no parent can be his/her own parent, nor his/her own child.
 
You may not see this lapse in your ontology axioms, but on looking at the formal models licensed by those axioms, you will see these unintended models (this is Mike Gruninger's point, I think): i.e., unless you axiomatize explicitly against a parent being his/her own parent, you will get formal models in which John is his own parent and his own child -- NOT what you intend, I think, if you really want to capture the real-world relationships.
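Here is a toy illustration (my own sketch, not Michael Gruninger's formal account): enumerate every possible extension of a binary parentOf relation over a two-element domain and see which ones the axioms admit.

# Toy model enumeration: which interpretations of parentOf do the axioms allow?
from itertools import chain, combinations

DOMAIN = ["John", "Mary"]
PAIRS = [(x, y) for x in DOMAIN for y in DOMAIN]

def all_relations():
    """Every possible extension of parentOf over the domain (2**4 = 16 of them)."""
    return chain.from_iterable(combinations(PAIRS, n) for n in range(len(PAIRS) + 1))

def everyone_has_a_parent(rel):
    return all(any((y, x) in rel for y in DOMAIN) for x in DOMAIN)

def nobody_is_own_parent(rel):
    return all((x, x) not in rel for x in DOMAIN)

weak   = [r for r in all_relations() if everyone_has_a_parent(r)]
strong = [r for r in weak if nobody_is_own_parent(r)]

print(len(weak), "models without the irreflexivity axiom")
print(len(strong), "models once it is added")
print([r for r in weak if ("John", "John") in r])   # the unintended models

Running this shows 9 models when only "everyone has a parent" is axiomatized, most of which make John his own parent; adding the irreflexivity axiom cuts that to 1. (Even the surviving model makes John and Mary each other's parents, which shows that still more axioms -- e.g. against cycles -- would be needed to leave only the intended models.)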
 
So axioms and models (syntax and semantics) help us to gauge what we really are modeling when we create an ontology which tries to model the real world. Other less formal languages (without a logic behind them) such as XML, UML, etc., cannot help us.
 
One additional point is that humans tend to use language in a way that helps us to label and then link the important concepts, and combinations of concepts, that we need in order to communicate with other humans. So, as a human, you will probably have a concept correlated with the term "person", but maybe not a direct concept correlated with the phrase (terms in a syntactically correct sequence) "a person who eats broccoli while reading the newspaper". That phrase is indeed expressible using natural language and links concepts like "person", "someone who eats broccoli", and "someone who reads the newspaper", but you don't need a single concept for it, just a composition of concepts.
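That composition can itself be written down formally; as a rough sketch (again with illustrative predicate names of my own, and ignoring the temporal overlap carried by "while"):

\[
\lambda x.\,\mathit{Person}(x) \wedge \exists y\,[\mathit{Broccoli}(y) \wedge \mathit{Eats}(x,y)] \wedge \exists z\,[\mathit{Newspaper}(z) \wedge \mathit{Reads}(x,z)]
\]

The expression denotes a class of individuals built entirely out of existing concepts; no new atomic concept has to be coined for it.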
 
Finally, ontology precedes epistemology (not to get into philosophical arguments!): you can only ground belief on knowledge, i.e., evidence on what you do know. You may not know which of 3 birth dates a prospective terrorist has (you have evidence for all 3), but you do know that all humans have only one birth date.
 
Leo


From: ontac-forum-bounces@xxxxxxxxxxxxxx [mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of Cassidy, Patrick J.
Sent: Friday, October 07, 2005 8:16 AM
To: ONTAC-WG General Discussion
Subject: RE: [ontac-forum] Follow up question on Ontology, knowledge, language confusion

Gary,
   Concerning your question:
>>>  The thrust of my issue is that while a completed ontology might avoid the issues of language and knowledge, the process of developing an ontology runs into both of these.  Our expertise has knowledge problems, and our discussion of the different concepts to merge uses language to communicate our ideas, so the resulting ontology product may reflect these problems....  Of course we may have methods to resolve these, but we shouldn't think of that as an automatic process.  Ontologies aren't built by tools, but by us using tools.
 
Yes, in developing ontologies, whether alone or collaboratively, we do rely on our language to describe the meanings that we want to associate with concepts.  But when we are trying to create unambiguous definitions, we tend to use only that portion of language that is itself fairly unambiguous, and in which the words are understood as labels for corresponding concepts that are widely shared and agreed on -- i.e., we use a fairly precise and common defining vocabulary when we want to be precise.  The example I provided was the Longman Dictionary of Contemporary English (LDOCE), in which the lexicographers consciously restricted themselves to a fairly small set of about 2000 basic words (pointers to concepts) to define all of the 56,000 "words and phrases" in that dictionary.
 
These basic concepts would have to substitute for the knowledge people must acquire over years of living in the physical world -- bumping into things, getting hungry and eating, interacting with other people, and learning how to create abstractions and to use the structures of common physical things and events ("head", "drop") to label more abstract, non-observable, derivative concepts.  They would have to substitute, that is, *if* they were not also defined and constrained by axioms that restrict the ways they can relate to each other.  As Adam pointed out, when we add axioms to the concepts in an ontology, the ambiguity is reduced, and the concepts are no longer merely arbitrary symbols, since they are constrained to interact with the other symbols in well-defined ways.  And as Michael Gruninger pointed out, the ideal toward which we should try to develop our ontologies is the case where the axiomatization is sufficiently rich that the set of models the definitions are consistent with is precisely the set of models that we intend to describe.  In that case those seemingly abstract mathematical structures in the computer will have a structure and behavior closely matching the structure and behavior of the real-world objects they are intended to represent, and the computer should be able to do reliable inferencing about the real world by manipulating those symbols.
 
We do start with language, but by using the most fundamental, precise, and broadly agreed-on defining words, we can create a comparably precise conceptual defining vocabulary for the computer, and build up more complex concepts in the computer that do not have the ambiguity of natural-language words, which may be used in different contexts to label multiple complex aggregates of concepts.
 
I would in fact recommend that the ONTACWG adopt an English-language "defining vocabulary" similar to that used in the LDOCE, but in which each word labels one concept in the existing ontology -- and that we use only that defining vocabulary to create the English-language definitions of the concepts we include in the ontology.  Then, when we find that the existing English-language defining vocabulary does not have the words we need to describe some new concept we want to add, we will have a hint that there is some fundamental concept missing which should be added to the conceptual defining vocabulary.
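A rough sketch of the kind of check this would enable (my own illustration -- the word list is a tiny placeholder, not an actual ONTACWG resource):

# Sketch: flag words in a proposed definition that fall outside the agreed
# defining vocabulary.  Each flagged word hints at a missing fundamental concept
# (or at a definition that needs to be rephrased).
import re

DEFINING_VOCABULARY = {   # placeholder; the real list would hold roughly 2000 entries
    "a", "an", "the", "person", "who", "which", "that", "eats", "reads",
    "green", "plant", "with", "small", "parts", "of", "is", "and", "or", "not",
}

def out_of_vocabulary(definition):
    words = re.findall(r"[a-z]+", definition.lower())
    return sorted({w for w in words if w not in DEFINING_VOCABULARY})

print(out_of_vocabulary("a person who eats broccoli"))
# -> ['broccoli'] : either define it using the vocabulary or add it as a concept

Nothing in the check is deep; its value is the hint it gives a human editor that the conceptual defining vocabulary may be incomplete.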
 
There is another related issue that arises in considering the epistemology of computer knowledge: people wonder how a computer can get a firm "grounding" in reality.  In the simplest case, a computer may be restricted to doing in-memory processing of data structures, and the meanings of the data structures would rely totally on what the programmer intends them to mean; the computer would have no independent way to check.  But the computer is not totally without connections to the real world.  It has disks, keyboards, and other input/output devices, with which it could "experiment" and get feedback to verify that there really is some "real world" out there.  And when we reach the stage where the knowledge representation is sufficiently reliable for basic conceptual computational issues, we could fit the computer with more elaborate interactive devices to get a more "human" feeling for the nature of physical reality.  To some extent, robotic systems have to do that right now.  But the issues we are dealing with in this working group don't require that level of "direct physical knowledge" in the computer.  Doing the research to create more elaborate representations will be, I expect, a lot more efficient after some Common Semantic Model ("COSMO") has been widely adopted, and multiple groups can efficiently share and reuse the results of their research because it references a common paradigm for representing knowledge.  At that point the efficiency of research may reach the point where the epistemological issues can be investigated in a meaningful way.  I think the COSMO has to come first.
 
Pat
 
 

Patrick Cassidy
MITRE Corporation
260 Industrial Way
Eatontown, NJ 07724
Mail Stop: MNJE
Phone: 732-578-6340
Cell: 908-565-4053
Fax: 732-578-6012
Email: pcassidy@xxxxxxxxx

 


From: ontac-forum-bounces@xxxxxxxxxxxxxx [mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of Gary Berg-Cross
Sent: Thursday, October 06, 2005 11:38 AM
To: ontac-forum@xxxxxxxxxxxxxx
Subject: [ontac-forum] Follow up question on Ontology, knowledge, language confusion


 
Adam Pease pointed out that we often confuse the ambiguity in language and the locality of knowledge with ontology.  I wanted to ask about this distinction, but there wasn't time for it; it might be worth discussing here so our concepts and methods are clear.
 
The thrust of my issue is that while a completed ontology might avoid the issues of language and knowledge, the process of developing an ontology runs into both of these.  Our expertise has knowledge problems, and our discussion of the different concepts to merge uses language to communicate our ideas, so the resulting ontology product may reflect these problems....  Of course we may have methods to resolve these, but we shouldn't think of that as an automatic process.  Ontologies aren't built by tools, but by us using tools.
 
Thoughts?
 
Gary Berg-Cross
EM&I
Potomac, MD
