
RE: [ontac-forum] Problems of ontology

To: "ONTAC-WG General Discussion" <ontac-forum@xxxxxxxxxxxxxx>
From: "Cassidy, Patrick J." <pcassidy@xxxxxxxxx>
Date: Sun, 14 May 2006 21:03:19 -0400
Message-id: <6ACD6742E291AF459206FFF2897764BEC38DAC@xxxxxxxxxxxxxxxxx>
Concerning John's "interoperability" analogy:    (01)

[JS]
>>People have been building interoperable computer systems for the
past 50 years (and for many centuries before that, people have
been using interoperable systems for banking, plumbing, railroads,
telegraph, electrical transmission, etc.).  None of them have ever
had agreement on anything more than the local task-oriented aspects
-- i.e., the details of the message formats, the signals that are
transmitted, or the size and shape of the connectors (e.g., the
plumbing or the railroad track).    (02)

Of course.  Those systems didn't do any reasoning with the boxcars,
plumbing fixtures, electric current or anything else they shared.  If
the ideas that are transmitted among people when they talk were as
simple as electric current, a pair of railroad tracks, or the opening
in a lead pipe, then the problem of semantic interoperability would
have been solved a year after the first electronic computer was turned
on.    (03)

I would argue that every system that successfully interoperates at any
level shares some upper ontology, whether explicitly or implicitly.
The complexity of the upper ontology required depends on the
complexity of the information being transmitted and on the need for
interpretation.  The "upper ontology" can be as simple as "string",
"number", and "image" if the two computer systems do not need to do
the same reasoning with the information, but only to display it.  If
they need to look inside the message and come to the same conclusions
from the same data, their "common upper ontology" needs to be more
complex.  Up to now, that "upper ontology" has typically resided in the
procedural mechanisms of the two programs, which have to do the same
data manipulation, though not necessarily in exactly the same way.    (04)
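
To make the contrast concrete, here is a toy Python sketch.  The message
fields, the threshold constant, and the rule are invented for this
illustration, not drawn from any particular system.  With agreement only
on "string" and "number" the two systems can each display a message; to
draw the same conclusion from it, they must also share the meaning of a
field and the rule that uses it.

# Toy illustration: one message, two levels of shared "upper ontology".
message = {"item": "boxcar-47", "net_weight_kg": 52000}

def display_only(msg):
    # Agreement limited to "string" and "number": just render the fields.
    return ", ".join(f"{k}={v}" for k, v in msg.items())

# Shared semantic commitment (invented for the sketch): what the field
# means and the rule that uses it.
MAX_LOAD_KG = 65000

def overweight(msg):
    # Both systems reach the same conclusion only because they share
    # this definition of "net_weight_kg" and the threshold rule.
    return msg["net_weight_kg"] > MAX_LOAD_KG

print(display_only(message))   # possible with the minimal shared "ontology"
print(overweight(message))     # requires the richer shared commitments above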

Why are we beating the dead horse of "complete" knowledge???  There is
almost always more to learn about some particular thing.  To my
knowledge, there is no disagreement about the proposition (from Bundy):
       "knowledge bases only need to be good enough for their
application"
But if the "application" is to accurately interpret some complex
information transmitted by another computer (or person), then "good
enough" may have to be very complex indeed, and there may need to be a
very large amount of agreement in the *relevant* parts of the two
ontologies to achieve the level of accuracy desired.    (05)

If we don't have all the information we want, we work with what we have
- that will be true of computers as it has always been for people.  So
why is it a revelation for Bundy to say:    (06)

"Our mechanism does not assume complete access to all the ontologies
  whose mismatches are to be resolved. Indeed, we argue that
  complete access will often be unattainable for commercial
  or technical reasons, e.g., because the ontologies are being
  generated dynamically."    (07)

Of course.  If people can't get all the information they could use, and
find approximate interpretations useful, then the approximations should
be used, even "often".  But it is also true that the more knowledge one
shares with a communicant, the better the chances of correct
interpretation ("resolving mismatches") will be.  And that, too, may be
necessary "often", and complete access by the communicating systems to
the relevant parts of each other's ontologies may well be available,
even "often".  If the communicating parties are willing to make their
upper ontologies mutually available, why would we not want a common
ontology standard to make them mutually comprehensible????      (08)

The point of having a common upper ontology is that it permits a much
higher level of accuracy in interpretation than "mediating" among
different ontologies does, and if there are applications that can use
such a common upper ontology, they should use it.  But they can't use
it if it doesn't exist.  We need a common upper ontology for those
applications that will be able to use it to good effect.  What fraction
of applications that will be is hard to predict: it depends on the
degree of semantics in the applications and on the need for accuracy in
the interpretation of communicated information.    (09)

It is interesting that Bundy considers the "BaseKB" of Cyc (thousands
of concepts) insignificant:
      "But even Cycorp is learning to cure itself of this heresy, by
tailoring particular knowledge bases to particular applications,
underpinned by a common core."    (010)

Sure.  Try it without a "common core".  Good luck.    (011)

There are plenty of applications that don't need a common upper
ontology (CUO).  I have a lot
of respect for people who are trying to do the best they can to achieve
some level of interoperability now, struggling along in the absence of
any widely accepted upper ontology standard.  But I am mystified by
people who seem to think that there is no place for a CUO in the
national information infrastructure.  A CUO isn't absolutely required
unless you want computers to automatically and accurately interpret
semantically rich information.  I suspect that such applications will
increasingly become very common.  They will get here sooner and work
better when we have a CUO that will let them work optimally.    (012)

Pat    (013)

Patrick Cassidy
MITRE Corporation
260 Industrial Way
Eatontown, NJ 07724
Mail Stop: MNJE
Phone: 732-578-6340
Cell: 908-565-4053
Fax: 732-578-6012
Email: pcassidy@xxxxxxxxx    (014)


-----Original Message-----
From: ontac-forum-bounces@xxxxxxxxxxxxxx
[mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of John F. Sowa
Sent: Saturday, May 13, 2006 11:08 PM
To: ontac-forum@xxxxxxxxxxxxxx
Subject: [ontac-forum] Problems of ontology    (015)

Folks,    (016)

I recently returned from the FLAIRS-06 conference, where Alan Bundy
gave a talk that supports many of the points I have been trying to
make in various discussions:  A single, unified upper ontology is
impossible to achieve, and it's not necessary for interoperability.    (017)

As an example of a contrary view, I'll quote some old email from
March 2:    (018)

Mike Uschold:    (019)

 > The issue of import is whether we can agree on some variation of:
 >
 > "A common upper ontology is essential for achieving affordable and
 > scalable semantic interoperability.  Summit participants will explore
 > alternative approaches to developing or establishing this common upper
 > ontology."
 >
 > My original comment was: I cannot endorse this statement for two
 > reasons.
 >
 > 1. I don't know that it is 'essential'.
 > 2. I don't believe it is possible to have a single CUO.    (020)

Nicola Guarino:    (021)

 > I agree, although, if it's pretty clear that an ULO is not essential
 > for semantic interoperability, it *seems* indeed essential for
 > *affordable and scalable* semantic interoperability. The latter is
 > hard to prove, however.    (022)

I think the reason why it's hard to prove is that it's false.    (023)

People have been building interoperable computer systems for the
past 50 years (and for many centuries before that, people have
been using interoperable systems for banking, plumbing, railroads,
telegraph, electrical transmission, etc.).  None of them have ever
had agreement on anything more than the local task-oriented aspects
-- i.e., the details of the message formats, the signals that are
transmitted, or the size and shape of the connectors (e.g., the
plumbing or the railroad track).    (024)

Following is the abstract and introduction of Bundy's paper.    (025)

John Sowa
____________________________________________________________________    (026)

On Repairing Reasoning Reversals via Representational Refinements    (027)

Alan Bundy, Fiona McNeill and Chris Walton    (028)

Abstract    (029)

Representation is a fluent.  A mismatch between the real
world and an agent's representation of it can be signalled by
unexpected failures (or successes) of the agent's reasoning.
The `real world' may include the ontologies of other agents.
Such mismatches can be repaired by refining or abstracting
an agent's ontology. These refinements or abstractions may
not be limited to changes of belief, but may also change the
signature of the agent's ontology. We describe the implementation
and successful evaluation of these ideas in the ORS system.
ORS diagnoses failures in plan execution and then repairs the
faulty ontologies. Our automated approach to dynamic ontology
repair has been designed specifically to address real issues
in multi-agent systems, for instance, as envisaged in the
Semantic Web.    (030)

Introduction    (031)

The first author [AB] has a vivid memory of his introductory
applied mathematics lecture during his first year at university.
The lecturer delivered a sermon designed to rid the incoming
students of a heresy. This heresy was to entertain a
vision of a complete mathematical model of the world. The
lecturer correctly prophesied that the students were dissatisfied
with the patent inadequacies of the mathematical models
they had learnt at school and impatient, now they had arrived
in the adult university world, to learn about sophisticated
models that were free of caveats such as treating the weight
of the string as negligible or ignoring the friction of
the pulley.    (032)

They were to be disappointed. Complete mathematical
models of the real world were unattainable, because it was
infinitely rich. Deciding which elements of the world were
to be modelled and which could be safely ignored was the
very essence of applied mathematics. It was a skill that
students had to learn -- not one that they could render
redundant by modelling everything.    (033)

This all now seems obvious. AB is surprised at the naivety
of his younger self -- since, before the sermon, he certainly
was guilty of this very heresy. But it seems this lesson
needs to be constantly relearnt by the AI community. We too
model the real world, for instance, with symbolic
representations of common-sense knowledge. We too become impatient
with the inadequacies of our models and strive to enrich
them. We too dream of a complete model of common-sense
knowledge and even aim to implement such a model,
cf. the Cyc Project. But even Cycorp is learning to cure
itself of this heresy, by tailoring particular knowledge bases
to particular applications, underpinned by a common core.    (034)

If we accept the need to free ourselves of this heresy and
accept that knowledge bases only need to be good enough
for their application, then there is a corollary that we must
also accept: the need for the knowledge in them to be fluent,
i.e., to change during its use. And, of course, we do accept
this corollary. We build adaptive systems that learn to tailor
their behaviour to a user or improve their capabilities over
time. We have belief-revision mechanisms, such as truth
maintenance (Doyle 1979), that add and remove knowledge
from the knowledge base.    (035)

However, it is the thesis of this paper that none of this goes
far enough. In addition, we must consider the dynamic evolution
of the underlying formalism in which the knowledge is represented.
To be concrete, in a logic-based representation the predicates
and functions, their arities and their types, may all need to
change during the course of reasoning.    (036)

Once you start looking, human common-sense reasoning
is full of examples of this requirement. Consider, for
instance, Joseph Black's discovery of latent heat. Before
Black, the concepts of heat and temperature were conflated.
It was thus a paradox that a liquid could change heat content,
but not temperature, as it converted to a solid or a gas.
Before formulating his theory of latent heat, Black had to
separate these two conflated concepts to remove the paradox
(Wiser & Carey 1983). Representational repair can also
move in the opposite direction: the conflation of "morning
star" and "evening star" into "Venus", being one of the most
famous examples.    (037)
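
As a toy illustration of the point (the predicate names and the Python
encoding are invented here, not taken from the paper): splitting a
conflated predicate changes the signature of the ontology, so the old
facts cannot simply be revised in place; they have to be re-stated in
the new vocabulary.

# Toy sketch: the conflated predicate heat(body, amount) is split into
# temperature(body, degrees) and heat_content(body, joules).  This is a
# change of signature, not merely a change of belief.
ontology_v1 = {
    "predicates": {"heat": ("body", "amount")},
    "facts": [("heat", "water_sample", 100)],
}

def refine_split(ontology, old, new_predicates):
    """Replace predicate `old` with the predicates in `new_predicates`.

    The old facts are dropped rather than copied: the refinement changes
    what can be expressed, so they must be re-expressed afterwards.
    """
    refined = {
        "predicates": {name: args for name, args in ontology["predicates"].items()
                       if name != old},
        "facts": [fact for fact in ontology["facts"] if fact[0] != old],
    }
    refined["predicates"].update(new_predicates)
    return refined

ontology_v2 = refine_split(
    ontology_v1,
    old="heat",
    new_predicates={
        "temperature": ("body", "degrees"),
        "heat_content": ("body", "joules"),
    },
)
print(sorted(ontology_v2["predicates"]))   # ['heat_content', 'temperature']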

But such representational refinement is not a rare event
reserved to highly creative individuals; it's a commonplace
occurrence for all of us. Every day we form new models to
describe current situations and solve new problems: from
making travel plans to understanding relationships with and
between newly met people. These models undergo constant
refinement as we learn more about the situations and get
deeper into the problems.    (038)

Consider, for instance, the commonplace experience of
buying something from a coin-in-the-slot machine. Suppose
the item to be bought costs £2. Initially, we may believe that
having £2 in cash is a sufficient precondition for the buying
action. However, we soon learn to refine that precondition
to having £2 in coins -- the machine does not take notes.
When we try to use the coins we have, we must refine further
to exclude the new 50p coins -- the machine is old and has
not yet been updated to the new coin. But even some of the,
apparently legitimate, coins we have are rejected. Perhaps
they are too worn to be recognised by the machine. Later a
friend shows us that this machine will also accept some foreign
coins, which, apparently, it confuses with British ones.
Refining our preconditions to adapt them to the real world
of this machine does not just involve a change of belief.
We have to represent new concepts: "coins excluding the new
50p", "coins that are not too worn to be accepted by this
particular machine", "foreign coins that will fool this machine",
etc.    (039)
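
A toy sketch of this progression (the coin property flags and predicate
names are invented for the illustration): each refinement of the
precondition introduces a new concept, and the refined versions can give
different answers for the same pocketful of coins.

# Toy sketch: successive refinements of the precondition for buying from
# the machine.
def pounds(coins):
    return sum(value for value, _props in coins) / 100.0

def precond_v1(cash, coins):
    # Initial belief: any £2 in cash suffices.
    return cash >= 2.0

def precond_v2(cash, coins):
    # Refined: must be £2 in coins; the machine takes no notes.
    return pounds(coins) >= 2.0

def precond_v3(cash, coins):
    # Refined again with a NEW concept: "coin this machine will accept"
    # (not a new 50p, not too worn).
    accepted = [(v, p) for v, p in coins
                if not p.get("new_50p") and not p.get("too_worn")]
    return pounds(accepted) >= 2.0

# (value in pence, invented property flags)
pocket = [(100, {}), (50, {"new_50p": True}), (50, {}), (20, {"too_worn": True})]
for precond in (precond_v1, precond_v2, precond_v3):
    print(precond.__name__, precond(2.20, pocket))
# v1 and v2 succeed, v3 fails: the refined concept changes the verdict.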

As another example, consider the experiment conducted by
Andrea diSessa on first-year MIT physics students (diSessa 1983).
The students were asked to imagine a situation in which a ball
is dropped from a height onto the floor.  Initially, the ball
has potential but not kinetic energy.  Just before it hits the
floor it has kinetic but not potential energy.  As it hits the
floor it has neither.  Where did the energy go?  The students
had trouble answering this question because they had idealised
the ball as a particle with mass but no extent.  To solve the
problem they had to refine their representation to give the ball
extent, so that the `missing' energy could be stored in the
deformation of the ball. Note that this requires a change in
the representation of the ball, not just a change of belief
about it.    (040)

The investigation of representational refinement has become
especially urgent due to the demand for autonomous,
interacting software agents, such as is envisaged in the
Semantic Web (Berners-Lee, Hendler, & Lassila 2001). To enable
such interaction it is assumed that the agents will share a
common ontology. However, any experienced programmer
knows that perfect ontological agreement between very large
numbers of independently developed agents is unattainable.
Even if all the ontology developers download their ontologies
from the same server, they will do so at different times
and get slightly different versions of the ontology. They will
then tweak the initial ontologies to make them better suited
to their particular application. We might safely assume a
~90% agreement between any two agents, but there will always
be that ~10% disagreement and it will be a different 10% for
each pair. The technology we discuss below provides a partial
solution to just this problem.    (041)

Note that our proposal contrasts with previous approaches
to ontology mapping, merging or aligning. Our mechanism
does not assume complete access to all the ontologies
whose mismatches are to be resolved. Indeed, we argue that
complete access will often be unattainable for commercial
or technical reasons, e.g., because the ontologies are being
generated dynamically. Moreover, our mechanism doesn't
require an off-line alignment of these mismatching ontologies.
Rather, it tries to resolve the mismatches in a piecemeal
fashion, as they arise and with limited, run-time interaction
between the ontology owning agents. It patches the ontologies
only as much as is required to allow the agent interaction
to continue successfully. Our mechanism is aimed at ontologies
that are largely in agreement, e.g., different versions of
the same ontology, rather than aligning ontologies with a
completely different pedigree, which is the normal aim of
conventional ontology mapping. It works completely
automatically. This is essential to enable interacting agents
to resolve their ontological discrepancies during run-time
interactions. Again, this contrasts with conventional ontology
mapping, which often requires human interaction.    (042)
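
The following toy sketch is not the ORS implementation; it is an
invented Python illustration of the general idea of piecemeal run-time
repair: a plan step fails because the local signature for an action has
drifted from the service's version, the failure report reveals only the
mismatching signature, and only that entry is patched before the step is
retried.

# Invented illustration (not ORS) of piecemeal, run-time repair of a
# signature mismatch between largely-agreeing ontologies.

# The service agent's view of the action (normally not fully accessible).
service_signatures = {"book_room": ("date", "room_type", "payment_method")}

def call_service(action, args):
    expected = service_signatures[action]
    if len(args) != len(expected):
        # The failure report reveals only the mismatching signature.
        raise ValueError(f"{action} expects arguments {expected}")
    return "ok"

# The planning agent's slightly out-of-date copy of the same ontology.
my_signatures = {"book_room": ("date", "room_type")}

def execute_with_repair(action, args, defaults):
    try:
        return call_service(action, args)
    except ValueError:
        # Diagnose from the failure (reading the signature directly here
        # stands in for the limited run-time interaction), patch ONLY this
        # action's entry, and retry the step.
        expected = service_signatures[action]
        my_signatures[action] = expected
        missing = expected[len(args):]
        patched = args + tuple(defaults[m] for m in missing)
        return call_service(action, patched)

print(execute_with_repair("book_room", ("2006-05-14", "single"),
                          defaults={"payment_method": "credit_card"}))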



_________________________________________________________________
Message Archives: http://colab.cim3.net/forum/ontac-forum/
To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
Subscribe/Unsubscribe/Config: 
http://colab.cim3.net/mailman/listinfo/ontac-forum/
Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
Community Wiki: 
http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/OntologyTaxonomyCoordinatingWG    (044)