Folks,
I recently returned from the FLAIRS-06 conference, where Alan Bundy
gave a talk that supports many of the points I have been trying to
make in various discussions: A single, unified upper ontology is
impossible to achieve, and it's not necessary for interoperability.
As an example of a contrary view, I'll quote some old email from
March 2:
Mike Uschold:
> The issue of import is whether we can agree on some variation of:
>
> "A common upper ontology is essential for achieving affordable and
> scalable semantic interoperability. Summit participants will explore
> alternative approaches to developing or establishing this common upper
> ontology."
>
> My original comment was: I cannot endorse this statement for two
> reasons.
>
> 1. I don't know that it is 'essential'.
> 2. I don't believe it is possible to have a single CUO.
Nicola Guarino:
> I agree, although, if it's pretty clear that an ULO is not essential
> for semantic interoperability, it *seems* indeed essential for
> *affordable and scalable* semantic interoperability. The latter is
> hard to prove, however.
I think the reason why it's hard to prove is that it's false.
People have been building interoperable computer systems for the
past 50 years (and for many centuries before that, people have
been using interoperable systems for banking, plumbing, railroads,
telegraph, electrical transmission, etc.). None of them have ever
had agreement on anything more than the local task-oriented aspects
-- i.e., the details of the message formats, the signals that are
transmitted, or the size and shape of the connectors (e.g., the
plumbing or the railroad track).
Following is the abstract and introduction of Bundy's paper.
John Sowa
____________________________________________________________________
On Repairing Reasoning Reversals via Representational Refinements
Alan Bundy, Fiona McNeill and Chris Walton
Abstract
Representation is a fluent. A mismatch between the real
world and an agent's representation of it can be signalled by
unexpected failures (or successes) of the agent's reasoning.
The `real world' may include the ontologies of other agents.
Such mismatches can be repaired by refining or abstracting
an agent's ontology. These refinements or abstractions may
not be limited to changes of belief, but may also change the
signature of the agent's ontology. We describe the implementation
and successful evaluation of these ideas in the ORS system.
ORS diagnoses failures in plan execution and then repairs the
faulty ontologies. Our automated approach to dynamic ontology
repair has been designed specifically to address real issues
in multi-agent systems, for instance, as envisaged in the
Semantic Web.
Introduction
The first author [AB] has a vivid memory of his introductory
applied mathematics lecture during his first year at university.
The lecturer delivered a sermon designed to rid the incoming
students of a heresy. This heresy was to entertain a
vision of a complete mathematical model of the world. The
lecturer correctly prophesied that the students were dissatisfied
with the patent inadequacies of the mathematical models
they had learnt at school and impatient, now they had arrived
in the adult university world, to learn about sophisticated
models that were free of caveats such as treating the weight
of the string as negligible or ignoring the friction of
the pulley.
They were to be disappointed. Complete mathematical
models of the real world were unattainable, because it was
infinitely rich. Deciding which elements of the world were
to be modelled and which could be safely ignored was the
very essence of applied mathematics. It was a skill that
students had to learn -- not one that they could render
redundant by modelling everything.
This all now seems obvious. AB is surprised at the naivety
of his younger self -- since, before the sermon, he certainly
was guilty of this very heresy. But it seems this lesson
needs to be constantly relearnt by the AI community. We too
model the real world, for instance, with symbolic representations
of common-sense knowledge. We too become impatient
with the inadequacies of our models and strive to enrich
them. We too dream of a complete model of common-sense
knowledge and even aim to implement such a model,
cf. the Cyc Project. But even Cycorp is learning to cure
itself of this heresy, by tailoring particular knowledge bases
to particular applications, underpinned by a common core.
If we accept the need to free ourselves of this heresy and
accept that knowledge bases only need to be good enough
for their application, then there is a corollary that we must
also accept: the need for the knowledge in them to be fluent,
i.e., to change during its use. And, of course, we do accept
this corollary. We build adaptive systems that learn to tailor
their behaviour to a user or improve their capabilities over
time. We have belief-revision mechanisms, such as truth
maintenance (Doyle 1979), that add and remove knowledge
from the knowledge base.
However, it is the thesis of this paper that none of this goes
far enough. In addition, we must consider the dynamic evolution
of the underlying formalism in which the knowledge is represented.
To be concrete, in a logic-based representation the predicates
and functions, their arities and their types, may all need to
change during the course of reasoning.
Once you start looking, human common-sense reasoning
is full of examples of this requirement. Consider, for
instance, Joseph Black's discovery of latent heat. Before
Black, the concepts of heat and temperature were conflated.
It was thus a paradox that a liquid could change heat content,
but not temperature, as it converted to a solid or a gas.
Before formulating his theory of latent heat, Black had to
separate these two conflated concepts to remove the paradox
(Wiser & Carey 1983). Representational repair can also
move in the opposite direction: the conflation of 'morning
star' and 'evening star' into 'Venus', being one of the most
famous examples.
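To make this concrete, here is a small Python sketch of those two
kinds of signature repair on a toy knowledge base of (predicate,
arguments) facts. It is purely illustrative, not code from the
paper; all the names are invented.

    # Illustrative sketch only -- not from the paper.  Facts are
    # (predicate, arguments) pairs; the repairs change the signature
    # (which predicates and constants exist), not just the beliefs.

    def split_predicate(facts, old, new_a, new_b, choose):
        """Split a conflated predicate `old` into `new_a` / `new_b`,
        using `choose` to decide which of the two new meanings each
        old fact was really about (cf. temperature vs. heat content)."""
        return [((new_a if choose(args) else new_b), args)
                if pred == old else (pred, args)
                for pred, args in facts]

    def merge_constants(facts, aliases, canonical):
        """Rewrite aliased constants to a single canonical name
        (cf. 'morning star' and 'evening star' becoming 'Venus')."""
        fix = lambda a: canonical if a in aliases else a
        return [(pred, tuple(fix(a) for a in args)) for pred, args in facts]

    kb = [("heat", ("melting_ice", "constant")),
          ("heat", ("melting_ice", "increasing")),
          ("rises_at_dawn", ("morning_star",)),
          ("visible_at_dusk", ("evening_star",))]

    # Before the split the two 'heat' facts look contradictory; after it
    # they describe two different properties, so the paradox disappears.
    kb = split_predicate(kb, "heat", "temperature", "heat_content",
                         choose=lambda args: args[1] == "constant")
    kb = merge_constants(kb, {"morning_star", "evening_star"}, "Venus")
    print(kb)

Both repairs alter which predicates and constants exist, not merely
which facts are believed.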
But such representational refinement is not a rare event
reserved to highly creative individuals; it's a commonplace
occurrence for all of us. Every day we form new models to
describe current situations and solve new problems: from
making travel plans to understanding relationships with and
between newly met people. These models undergo constant
refinement as we learn more about the situations and get
deeper into the problems.
Consider, for instance, the commonplace experience of
buying something from a coin-in-the-slot machine. Suppose
the item to be bought costs £2. Initially, we may believe that
having £2 in cash is a sufficient precondition for the buying
action. However, we soon learn to refine that precondition
to having £2 in coins -- the machine does not take notes.
When we try to use the coins we have, we must refine further
to exclude the new 50p coins -- the machine is old and has
not yet been updated to the new coin. But even some of the,
apparently legitimate, coins we have are rejected. Perhaps
they are too worn to be recognised by the machine. Later a
friend shows us that this machine will also accept some foreign
coins, which, apparently, it confuses with British ones.
Refining our preconditions to adapt them to the real world
of this machine does not just involve a change of belief.
We have to represent new concepts: 'coins excluding the new
50p', 'coins that are not too worn to be accepted by this
particular machine', 'foreign coins that will fool this machine',
etc.
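A toy Python version of those successive preconditions (again purely
illustrative; the attribute names are invented) makes the point that
each refinement needs vocabulary the previous version lacked:

    # Illustrative sketch only.  Each revision of the precondition
    # needs a concept the previous one did not have.

    from dataclasses import dataclass

    @dataclass
    class Money:
        pence: int
        is_note: bool = False             # needed for version 2
        is_new_50p: bool = False          # needed for version 3
        too_worn: bool = False            # needed for version 3
        foreign_but_passes: bool = False  # needed for version 4

    def can_buy_v1(cash):                 # "having £2 in cash is enough"
        return sum(m.pence for m in cash) >= 200

    def can_buy_v2(cash):                 # the machine does not take notes
        return sum(m.pence for m in cash if not m.is_note) >= 200

    def can_buy_v3(cash):                 # nor new 50p coins, nor worn coins
        return sum(m.pence for m in cash
                   if not (m.is_note or m.is_new_50p or m.too_worn)) >= 200

    def can_buy_v4(cash):                 # but some foreign coins fool it
        usable = [m for m in cash
                  if m.foreign_but_passes
                  or not (m.is_note or m.is_new_50p or m.too_worn)]
        return sum(m.pence for m in usable) >= 200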
As another example, consider the experiment conducted by
Andreas diSessa on first-year MIT physics students (diSessa 1983).
The students were asked to imagine a situation in which a ball
is dropped from a height onto the floor. Initially, the ball
has potential but not kinetic energy. Just before it hits the
floor it has kinetic but not potential energy. As it hits the
floor it has neither. Where did the energy go? The students
had trouble answering this question because they had idealised
the ball as a particle with mass but no extent. To solve the
problem they had to refine their representation to give the ball
extent, so that the `missing' energy could be stored in the
deformation of the ball. Note that this requires a change in
the representation of the ball, not just a change of belief
about it.
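A back-of-the-envelope sketch of the energy bookkeeping (illustrative
only, with made-up numbers) shows why the refinement is needed:

    # Illustrative sketch only.  With the ball as a point mass, the energy
    # ledger at the instant of impact cannot balance; giving the ball
    # extent adds a place (elastic deformation) for the energy to go.

    m, g, h = 0.1, 9.81, 2.0          # kg, m/s^2, m (made-up numbers)
    e_start = m * g * h               # all potential energy at release

    # Point-mass representation: at the moment of impact, height ~ 0 and
    # speed ~ 0, so both PE and KE vanish -- the energy seems lost.
    pe, ke = 0.0, 0.0
    print(e_start, pe + ke)           # 1.962 vs 0.0: the 'missing' energy

    # Refined representation: the ball has extent and stiffness k, and the
    # missing energy sits in its deformation x (0.5 * k * x**2 = e_start).
    k = 5000.0                        # N/m, made-up stiffness
    x = (2 * e_start / k) ** 0.5
    e_deformation = 0.5 * k * x ** 2
    print(e_start, pe + ke + e_deformation)   # now the ledger balances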
The investigation of representational refinement has become
especially urgent due to the demand for autonomous,
interacting software agents, such as is envisaged in the
Semantic Web (Berners-Lee, Hendler, & Lassila 2001). To enable
such interaction it is assumed that the agents will share a
common ontology. However, any experienced programmer
knows that perfect ontological agreement between very large
numbers of independently developed agents is unattainable.
Even if all the ontology developers download their ontologies
from the same server, they will do so at different times
and get slightly different versions of the ontology. They will
then tweak the initial ontologies to make them better suited
to their particular application. We might safely assume a
~90% agreement between any two agents, but there will always
be that ~10% disagreement and it will be a different 10% for
each pair. The technology we discuss below provides a partial
solution to just this problem.
Note that our proposal contrasts with previous approaches
to ontology mapping, merging or aligning. Our mechanism
does not assume complete access to all the ontologies
whose mismatches are to be resolved. Indeed, we argue that
complete access will often be unattainable for commercial
or technical reasons, e.g., because the ontologies are being
generated dynamically. Moreover, our mechanism doesn't
require an off-line alignment of these mismatching ontologies.
Rather, it tries to resolve the mismatches in a piecemeal
fashion, as they arise and with limited, run-time interaction
between the ontology owning agents. It patches the ontologies
only as much as is required to allow the agent interaction
to continue successfully. Our mechanism is aimed at ontologies
that are largely in agreement, e.g., different versions of
the same ontology, rather than aligning ontologies with a
completely different pedigree, which is the normal aim of
conventional ontology mapping. It works completely
automatically. This is essential to enable interacting agents
to resolve their ontological discrepancies during run-time
interactions. Again, this contrasts with conventional ontology
mapping, which often requires human interaction.
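In outline, the behaviour described above amounts to a loop of the
following shape. This is a rough Python sketch; the function names
are placeholders, not the actual ORS interfaces.

    # Illustrative sketch only; all names are placeholders, not ORS code.
    # Plan, execute, diagnose any failure, patch the local ontology just
    # enough to continue, and replan -- at run time, without human help.

    def run_with_repair(ontology, goal, plan, execute, diagnose, patch,
                        max_repairs=5):
        for _ in range(max_repairs):
            steps = plan(ontology, goal)
            for step in steps:
                ok, feedback = execute(step)
                if not ok:
                    # Limited run-time interaction with the other agent
                    # exposes one mismatch, which is repaired piecemeal.
                    mismatch = diagnose(step, feedback, ontology)
                    ontology = patch(ontology, mismatch)
                    break                 # replan with the patched ontology
            else:
                return ontology           # the whole plan succeeded
        raise RuntimeError("repair budget exhausted")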