Eric, Azamat, Matthew, Leo, Chuck, Cathy, and Pat, (01)
I believe that the issues we have been debating can be
clarified by drawing a fundamental distinction: (02)
- Vagueness: Lexicons and terminologies expressed in
natural languages, such as WordNet and the various
dictionaries designed for human consumption, are vague.
It is possible for humans to reach agreement on such
matters *because* they are sufficiently vague that
different people can adapt the interpretations of the
same words in different ways in different situations. (03)
- Precision: Formal logics and computer programs must
be precise in order to support long chains of inference.
But for that reason, they are fragile and easily falsified.
Falsifiability, as Popper pointed out, is important in
science, because it enables clear predictions that can
be tested. But when you're crossing a bridge, you don't
want the theory on which it is based to be falsified.
That is why most bridges are "overdesigned" to accommodate
details that were not anticipated in the requirements. (04)
On this matter, I'll repeat Peirce's comment (CP 4.237): (05)
It is easy to speak with precision upon a general theme.
Only, one must commonly surrender all ambition to be certain. (06)
It is equally easy to be certain. One has only to be
sufficiently vague. (07)
It is not so difficult to be pretty precise and fairly
certain at once about a very narrow subject. (08)
We can reach some level of agreement on many general
principles because they are "sufficiently vague". That
includes most of the philosophical notions that people
want to put into upper-level ontologies. But at that
level, it is impossible to have axioms that are
simultaneously precise, detailed, and general. (09)
If we want to be "pretty precise and fairly certain",
we have to restrict our scope to "a very narrow subject."
In Cyc terms, that is a microtheory. That is where we
can put the axioms that are both precise and detailed. (010)
Some people have mentioned part-whole as a notion that
should be in an upper-level ontology. That's possible,
but only if it is left vague -- i.e., with few or no
axioms. As soon as you try to be detailed, you run
into an enormous number of conflicting interpretations.
For a sample of the many incompatible axiomatizations
of part-whole, I recommend Peter Simons' book: (011)
Simons, Peter (1987) _Parts: A Study in Ontology_,
Clarendon Press, Oxford. (012)
I'll try to relate this distinction to the email notes: (013)
EP> So when I talk about working toward a merged upper model,
> I'm attempting to have a much simpler discussion than I
> believe that you are.
>
> I claim that an ontological model suitable for merging
> databases needs no baroque axioms. (014)
That sounds as if you are asking for a vague upper level,
somewhat along the lines of a cleaned-up version of WordNet. (015)
EP> My claim is that microtheories are not needed for a
> database federating ontology. At best microtheories are
> arbitrary engineering conveniences. At worst they get in
> the way and bag up things that do not always belong together. (016)
Since most current databases do not have precise definitions
of what goes into the various slots in the tables,
a vague upper level is probably sufficient. But that means
you cannot do long chains of inference, as many people want
to do. The detailed inferences can only be done with detailed
axioms of the kind that typically occur in microtheories. (017)
AA> ... knowledge machines with the built-in common ontology
> framework, a single code of fundamental standards, principles,
> rules, and laws, suggesting a broad, integrated model of things
> in the world. (018)
That is possible, provided that the framework is "sufficiently
vague" -- such as the dictionaries and encyclopedias designed
for human consumption. I have no quarrels with that, but it
should not be confused with what people call "formal ontology." (019)
AA> ... such an ontological groundwork began at the moment humans
> started to systematically meditate about the environmental
> categories such as being, reality, existence, cause, time, space,
> life, etc. As a result, any substantial doctrines, religious,
> ideological or political or social, are pervaded with ontological
> ideas. (020)
I agree. But if you ask a hundred people to state what they mean
by any of those terms, you will get 100 different definitions.
(If you ask 100 philosophers, you will get 500 definitions of each.)
All of those terms belong in a WordNet-style terminology, but
trying to put any of them into a formal upper-level ontology will
create interminable debate, as in the past 6 years of SUO. (021)
MW> I find 3 key relations that turn up in any upper ontology, that
> are important at any level, and that, particularly in a 4D ontology,
> turn up everywhere. These are:
>
> - Whole-part
> - Class-member
> - Superclass-subclass (022)
I agree that those are fundamental. For classes, members, and
superclasses, I would suggest the formalism of whatever version
of logic is used to state the ontology. If you use predicate
calculus, for example, you can assign a monadic predicate P to
each class (or type, as I would prefer to say). Then (023)
x is a member of the class P iff P(x) is true. (024)
Q is a subclass of P iff for all x, Q(x) implies P(x). (025)
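To make that concrete, here is a rough sketch in Python (my own
illustration; the predicates, individuals, and helper functions are
invented for the example). It treats each type as a monadic predicate
and checks membership and subclasses by brute force, which only makes
sense under the assumption of a small, explicitly listed universe of
discourse:

# Each type is modeled as a monadic predicate: a function from an
# individual to True or False.  The individuals are made up purely
# for illustration.
def animal(x):
    return x in {"Yojo", "Macavity", "Socrates"}

def cat(x):
    return x in {"Yojo", "Macavity"}

# x is a member of the class P iff P(x) is true.
def is_member(x, P):
    return P(x)

# Q is a subclass of P iff for all x, Q(x) implies P(x).
# Over a finite, explicitly listed universe, "for all x" becomes all().
def is_subclass(Q, P, universe):
    return all((not Q(x)) or P(x) for x in universe)

universe = ["Yojo", "Macavity", "Socrates"]
print(is_member("Yojo", cat))                # True
print(is_subclass(cat, animal, universe))    # True

Nothing here depends on Python; the same definitions could be stated
directly in whatever logic is used for the ontology.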
For whole-part, there are unsolvable problems if you try
to capture the informal notion in any axiomatization. One
solution is to stipulate a rather bland version with just
a few axioms, and then add any additional axioms needed in
the more specialized (but mutually inconsistent) microtheories. (026)
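For concreteness, the bland version I have in mind could be as small
as the partial-order axioms of ground mereology -- reflexivity,
antisymmetry, and transitivity of partOf -- written here as
first-order formulas in LaTeX notation. Whether even these three
axioms hold in every application is exactly the kind of question that
the specialized microtheories (and Simons' book) have to sort out:

\forall x\; \mathrm{partOf}(x,x)
\forall x \forall y\; (\mathrm{partOf}(x,y) \wedge \mathrm{partOf}(y,x) \rightarrow x = y)
\forall x \forall y \forall z\; (\mathrm{partOf}(x,y) \wedge \mathrm{partOf}(y,z) \rightarrow \mathrm{partOf}(x,z))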
MW> The problem is that there is not just one unified way of
> looking at the universe, but a very large number of unified
> ways of looking at the universe. (027)
I certainly agree. (028)
LO> Throwing up your hands in either direction -- 1) we can have
> no ultimate common human understanding, and 2) we cannot formalize
> any common human understanding that is usable by machine -- is, to
> me, something to be ascertained by way of empirical experiment and
> NOT ascertained ex nihilo, by philosophical or ideological argument. (029)
I agree. I believe we can have two things: (1) A vague terminology
that covers everything that people want to say in whatever way they
prefer to say it, and (2) a collection of formal theories for every
one of the "language games" that anyone may choose to play. (030)
But, and this is a very big *BUT*, it is essential to be clear
about the differences between (1) and (2). It is possible to have
a mapping between them, but it is not one-to-one, and it most
definitely does not coincide with the split between upper-level
and lower-level ontology. (031)
CT> ... what I think is lacking is a way to describe that process
> [of looking at the universe], and more importantly, to explicitly
> state the result of that process (the organization of everything,
> complete with identity and relation of all parts - in short, an
> ontology). (032)
Perhaps, but the problem is that there may be agreement among
many people in the choice of words, but the details of how they
define (or never define) those words vary from one individual
to another. Even worse, they vary from one situation to another,
even for the same individual. (033)
CT> ... Since this is so difficult (as seen by all the expressed
> inadequacies with any ontology thesis proposed), it may be the
> limitation of our current language to reflect what is going on. (034)
Actually, I would say that our natural languages, with all their
vagueness, are better able "to reflect what is going on." The
difficulties arise when we try to force our vague ideas into any
precise formal language. On this point, I like to quote another
logician and philosopher, Alfred North Whitehead: (035)
Human knowledge is a process of approximation. In the focus of
experience, there is comparative clarity. But the discrimination
of this clarity leads into the penumbral background. There are
always questions left over. The problem is to discriminate
exactly what we know vaguely. (036)
CL> I was interested to read your description of your evolution
> away from advocating axiom-rich ontologising towards recommending
> something much closer to Wordnet, indeed a system where (if I have
> it right) the only assertions in the ontology are purely definitional,
> concerning the meanings of terms, and no empirical claims are made.
> This sounds prima facie plausible. However, can a sharp distinction
> be made between these two categories of claim? I don’t think so. (037)
I agree. That was the point of the debate between Carnap and Quine.
Carnap proposed the term "meaning postulate" and defined the notion
of a proposition p being "analytic in language L" iff it is provable
from the meaning postulates of language L. Quine demolished
Carnap's position with regard to natural language in his essay
"Two Dogmas of Empiricism". I believe that both Carnap and Quine
were right: you can't draw a firm distinction in any natural
language, but you can stipulate a set of conventions in some formal
language L. Those are the kinds of conventions I would put
in an upper ontology -- but with the caveat that they are *not*
intended to define the words of any natural language. (038)
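Carnap's definition, as I am paraphrasing it, fits in one line; here
MP(L) is my own shorthand for the set of meaning postulates stipulated
for the formal language L:

p \text{ is analytic in } L \;\iff\; \mathrm{MP}(L) \vdash p

Quine's objection is that no such set MP(L) can be identified for a
natural language; for a stipulated formal language, the definition is
unproblematic.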
CL> Consider for instance, “Electrons have a negative charge”. This
> was an empirical claim at some early stage of atomic theory yet
> is now analytically true. “Cats have whiskers” – empirical or
> definitional?
>
> To give just one example, ‘person’ (the legal – social entity)
> was separated out from ‘human being’ (the species) and various
> inferences which depended on those two concepts being munged
> together then broke…. (039)
I would never put the definitions of Electron, Cat, HumanBeing,
Person, or many other common terms in the upper ontology. I
would, however, be willing to have a drastically underspecified
type hierarchy that says that Cat and HumanBeing are subtypes
of Animal or that Person is a role that a HumanBeing may play
but that social organizations could also play. (040)
I'd be willing to say that cats, dogs, and human beings are
disjoint, but I wouldn't attempt to say what distinguishes
them. You would need different microtheories for a zoologist
and for a child who was just learning to talk about Mommy,
Daddy, and Kitty. (041)
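As an illustration of how little such a hierarchy would say, here is
one possible set of axioms (my own crude formalization, not a proposal
for any particular ontology; in particular, treating the role Person
as a disjunctive supertype glosses over the role/type distinction,
which a serious ontology would handle more carefully):

\forall x\, (\mathrm{Cat}(x) \rightarrow \mathrm{Animal}(x))
\forall x\, (\mathrm{Dog}(x) \rightarrow \mathrm{Animal}(x))
\forall x\, (\mathrm{HumanBeing}(x) \rightarrow \mathrm{Animal}(x))
\forall x\, \neg(\mathrm{Cat}(x) \wedge \mathrm{Dog}(x))
\forall x\, \neg(\mathrm{Cat}(x) \wedge \mathrm{HumanBeing}(x))
\forall x\, \neg(\mathrm{Dog}(x) \wedge \mathrm{HumanBeing}(x))
\forall x\, (\mathrm{Person}(x) \rightarrow (\mathrm{HumanBeing}(x) \vee \mathrm{Organization}(x)))

Everything that distinguishes a cat from a dog, or a zoologist's
Felis catus from a child's Kitty, is deliberately left to the
microtheories.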
PC> If any two ontology-driven systems want to work together,
> that upper ontology, WHATEVER IT IS, has to be the same. (042)
That is false. Any two systems that interoperate (human or
computer) *always* interoperate on a specific task or set of
related tasks. The microtheory or theories that characterize
those tasks are where precise agreements must be negotiated. (043)
I'll admit that vague ontologies, at the level of WordNet,
can be aligned because they have very few axioms, but the
detailed alignments must be done at the task level. (044)
PC> So let us not confuse the issue by failing to recognize
> the distinction between one group choosing an upper ontology
> (or no upper ontology) for its own isolated purposes, and
> multiple groups choosing an upper ontology for communication
> between semantic systems. (045)
That is just a tiny subset of all the distinctions that can
and should be made. When multiple groups agree on a precisely
defined standard for anything, it is always for a very narrow
microtheory. Global agreements are inevitably vague. (046)
MW> If you don't have a common upper ontology you do need to
> have a mapping between the upper ontologies you do have. What
> is true is that having a common upper ontology gives efficiency
> benefits as the number of ontologies you are trying to integrate
> rises. At about 4 ontologies a common ontology (at all levels
> that are shared) pays off. (047)
Three points: (048)
1. Note the parenthetical expression "at all levels that are
shared". Categories that are not shared need not be aligned. (049)
2. Since Matthew is a strong advocate of 4D ontologies and many
other ontologies use a 3D approach, I suspect that he has
not been forcing those details to be aligned. (050)
3. By "efficiency", I suspect that Matthew means that the more
ontologies you try to align, the more troublesome details
are removed from the upper levels and put into the more
specialized microtheories. (051)
Summary: Global agreement is possible for vague things like
WordNet or for things like OpenCyc, in which most of the axioms
are omitted. Interoperability on the details must always be
specified at the level of one or more microtheories. (052)
John Sowa (053)