To: "ONTAC-WG General Discussion" <ontac-forum@xxxxxxxxxxxxxx>
From: "Obrst, Leo J." <lobrst@xxxxxxxxx>
Date: Fri, 7 Oct 2005 17:13:32 -0400
Message-id: <9F771CF826DE9A42B548A08D90EDEA807E97A7@xxxxxxxxxxxxxxxxx>
I'll weigh in to support Pat on a couple of these points. I
apologize in advance if I get too technical.
The whole point of using logic for ontologies and for
expressing natural language semantics is to have a formal language in which the
meanings of ambiguous natural language statements can be stated unambiguously,
i.e., in which the distinct readings can be teased apart and each represented
explicitly, along every dimension of the ambiguity. If I say "the tank next to
the bank", there are at least 4 possible interpretations/meanings: 1) the
military vehicle next to the river bank, 2) the military vehicle next to the
financial bank building, 3) the liquid container next to the river bank, 4)
the liquid container next to the financial bank building. With additional
natural language, as in 1-4, we can paraphrase away the ambiguity; with logic,
we can represent those distinctions in a formal language that
machines can use.
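Just to make that concrete, here is one way the four readings might be teased
apart in first-order logic (my own illustrative notation; the predicate names
MilitaryVehicle, LiquidContainer, RiverBank, FinancialBank, and nextTo are
invented for the example):

    1) \exists x\, \exists y\, (MilitaryVehicle(x) \wedge RiverBank(y) \wedge nextTo(x, y))
    2) \exists x\, \exists y\, (MilitaryVehicle(x) \wedge FinancialBank(y) \wedge nextTo(x, y))
    3) \exists x\, \exists y\, (LiquidContainer(x) \wedge RiverBank(y) \wedge nextTo(x, y))
    4) \exists x\, \exists y\, (LiquidContainer(x) \wedge FinancialBank(y) \wedge nextTo(x, y))

One English string, four distinct unambiguous logical statements.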
We use human language terms to label ontology concepts
because those terms are typically readily available to us human beings in a
given language (English, Chinese, etc.) and we largely agree, intuitively,
on their meaning. But terms and concepts are quite distinct
items: terms are labels that index the concepts, and the concepts express and
model the meaning of those labels.
So the use of these labeling terms is really to aid humans
who look over the ontology concepts (represented formally in an ontology), and
say, yes, this label "Person" for the concept Person with these formally
represented relations, properties, superclasses, subclasses, and axioms is
really what I mean by the English word "person", or is at least an approximation
of what I mean. That is, a person is necessarily all those things but may in
addition be other things: we try initially to capture the "necessary"
conditions and over time to capture the "sufficient" conditions as well. Humans
necessarily are mammals and have parents, but only sometimes like to chew
gum, and sometimes have no address at all. If you don't have an address, you
are still human.
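A sketch of what I mean by necessary conditions, with toy axioms of my own:

    \forall x\, (Person(x) \rightarrow Mammal(x))
    \forall x\, (Person(x) \rightarrow \exists y\, parentOf(y, x))

Nothing here requires a person to have an address or to like chewing gum;
those are not necessary conditions, so the axioms stay silent about them.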
Most semanticists in natural language use what's called
"model-theoretic semantics" to express the set of formal models which are
licensed by the logical statements/expressions: i.e., you go from the axioms to
the formal models in ontological engineering just like you go from the natural
language sentences (of English, Chinese, etc.) as expressed in logical
statements to their formal models (typically represented mathematically as
structures in set theory).
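To make "formal model" concrete, here is a deliberately tiny illustration of
my own. Take the single axiom

    \forall x\, \exists y\, parentOf(y, x)

One structure satisfying it is M = \langle D, I \rangle with domain
D = \{a, b\} and interpretation I(parentOf) = \{(a, b), (b, a)\}: every
element of D has something standing in the parentOf relation to it, so the
axiom is true in M. The set of all structures in which the axioms come out
true is their model-theoretic semantics.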
Why? Because this syntax-to-semantics (axioms-to-models)
mapping enables you to characterize what you "really mean" and compare that to
what you "intend to mean." Example: you might have axioms about parent and
child classes in an ontology, i.e., parent is a role of a person (one can be
both a parent and a child, an employee, a carpenter, an author, a stamp
collector, etc.), but have forgotten to include an axiom which states that no
parent can be his/her own parent, nor his/her own child.
You may not see this lapse in your ontology axioms, but on
looking at the formal models licensed by those axioms, you will see these
unintended models (this is Mike Gruninger's point, I think), i.e., unless you
axiomatize explicitly against a parent being his/her own parent, you will get
formal models in which John is his own parent and his own child -- NOT what you
intend, I think, if you really want to capture the real-world relationships.
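If it helps, here is a small Python sketch of my own (purely illustrative, not
any particular tool) that enumerates every possible extension of parentOf over
a two-element domain and shows how many licensed models the forgotten
irreflexivity axiom would rule out:

    from itertools import product

    # Two-element domain; every subset of domain x domain is a candidate
    # interpretation (extension) of the parentOf relation.
    domain = ["John", "Mary"]
    pairs = [(x, y) for x in domain for y in domain]

    def irreflexive(rel):
        # The forgotten axiom: no one is his/her own parent.
        return all(x != y for (x, y) in rel)

    models = [frozenset(p for p, keep in zip(pairs, bits) if keep)
              for bits in product([False, True], repeat=len(pairs))]

    intended = [m for m in models if irreflexive(m)]
    unintended = [m for m in models if not irreflexive(m)]

    print(len(models), "models licensed without the axiom")   # 16
    print(len(intended), "models once the axiom is added")    # 4
    print("an unintended model:", set(unintended[0]))         # e.g. {('Mary', 'Mary')}

Twelve of the sixteen candidate models over {John, Mary} contain someone who
is his/her own parent; one missing axiom, and all twelve are licensed.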
So axioms and models (syntax and semantics) help us to
gauge what we really are modeling when we create an ontology which tries to
model the real world. Less formal languages without a logic behind
them, such as XML, UML, etc., cannot help us in this way.
One additional point: humans tend to use
language in a way that helps us to label and then link the important concepts
and combinations of concepts that we need in order to communicate with other
humans. So as a human you will probably have a concept correlated with
the term "person" but maybe not a direct concept correlated with the phrase
(terms in a syntactically correct sequence) "a person who eats broccoli while
reading the newspaper". That phrase is indeed expressible using natural language
and links concepts like "person", "someone who eats broccoli", and "someone who
reads the newspaper", but you don't need a single concept for that, just a
composition of concepts.
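In logic that composition is direct (again my notation, and glossing over the
temporal "while"): the phrase corresponds to a complex predicate built from
the component concepts,

    \lambda x\,.\, Person(x) \wedge \exists y\, (Broccoli(y) \wedge eats(x, y)) \wedge \exists z\, (Newspaper(z) \wedge reads(x, z))

with no new primitive concept needed.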
Finally, ontology precedes epistemology (not to get into
philosophical arguments!): you can only ground belief in knowledge,
i.e., ground evidence in what you do know. You may not know which of 3 birth
dates a prospective terrorist has (you have evidence for all 3), but you do
know that all humans have exactly one birth date.
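As an axiom (mine, simplified):

    \forall x\, (Human(x) \rightarrow \exists!\, d\, birthDate(x, d))

Given evidence for 3 candidate birth dates, the axiom tells you at least 2 of
them must be wrong, even before you know which.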
Leo