Hi John; 
  
Reconciling upper ontologies is certainly 
not for the faint of heart.  It was not a trivial task to reconcile and 
combine the event models of HPKB Cyc and IODE.  But once I studied and 
deeply understood each model, I was able to make a small number of fixes to 
each artifact.  The models then slid together with a few taps of the 
mallet.  OK, it was actually a rather iterative process ;^) 
  
I find that the act of reconciliation is 
the ultimate test of the correctness of the respective artifacts. 
  
I should mention that I did use some 
immunosuppressive drugs on Cyc.  I didn't have time to worry about its 
meta-types, so I temporarily ripped them out.  So in my merged artifact, the 
Cyc collections (classes) that I used are all instances of some rather 
general IODE class (metaclass).  I certainly 
believe in the logical correctness of meta-types and hyper-meta-types (turtles 
all the way down); I just don't find them that useful for my present 
tasks. 
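
To make that flattening step concrete, here is a minimal Python sketch of 
the idea; the names (flatten_metatypes, GENERAL_IODE_METACLASS) are invented 
for illustration and are not actual Cyc or IODE identifiers.

# Hypothetical sketch of the "immunosuppressive" step described above:
# drop Cyc's meta-type assertions and re-type every imported Cyc
# collection as an instance of one general IODE metaclass.

GENERAL_IODE_METACLASS = "GeneralClass"   # stand-in name, not a real IODE symbol

def flatten_metatypes(cyc_collections, cyc_metatype_assertions):
    """Return instance-of assertions for IODE plus the discarded meta-typing.

    cyc_collections: names of the Cyc collections (classes) being imported.
    cyc_metatype_assertions: (collection, meta-type) pairs that are set aside
        rather than reconciled.
    """
    discarded = list(cyc_metatype_assertions)          # temporarily ripped out
    retyped = [(c, GENERAL_IODE_METACLASS) for c in cyc_collections]
    return retyped, discarded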
  
So my merging philosophy 
involves starting by merging the parts that one really needs and "fixing" 
the nits in both artifacts that prevent merging.  One of the "fixes" was 
actually not a fix but a change of a temporal definition in one to agree with 
the other: I made time points intervals in the IODE definitions.  
IODE had a perfectly reasonable definition, but I would have had to change much 
more in Cyc to get it to agree with IODE.  Ontology Works has been very 
cooperative and supportive of the effort. 
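
For anyone curious what that temporal change amounts to, the usual move is to 
treat a time point as a degenerate interval whose start and end coincide.  A 
small Python sketch of the idea (my own illustration, not the actual IODE or 
Cyc axioms):

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float          # e.g., seconds since some agreed epoch
    end: float

    def is_point(self) -> bool:
        """A time point is just an interval of zero duration."""
        return self.start == self.end

def time_point(t: float) -> Interval:
    """Construct a time point as a degenerate interval."""
    return Interval(start=t, end=t)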
  
Note, please, that I am not merging to 
prove that we can all join hands and merge.  I simply need many more 
definitions than I presently have, and I find it very cost-effective to grab 
definitions from Cyc.  I occasionally need to fix them.  This upper 
ontology alignment is simply a means of making it easier to import Cyc content 
into IODE. 
  
  
Best, 
  
-Eric Peterson 
  
From: ontac-forum-bounces@xxxxxxxxxxxxxx on behalf of John F. Sowa
Sent: Sat 5/13/2006 11:07 PM
To: ontac-forum@xxxxxxxxxxxxxx
Subject: [ontac-forum] Problems of ontology
  
Folks,
  I recently returned from the FLAIRS-06 conference, 
where Alan Bundy gave a talk that supports many of the points I have been 
trying to make in various discussions:  A single, unified upper ontology 
is impossible to achieve, and it's not necessary for 
interoperability.
  As an example of a contrary view, I'll quote some old 
email from March 2:
Mike Uschold:

 > The issue of import is whether we can agree on some variation of:
 >
 > "A common upper ontology is essential for achieving affordable and
 > scalable semantic interoperability.  Summit participants will explore
 > alternative approaches to developing or establishing this common upper
 > ontology."
 >
 > My original comment was: I cannot endorse this statement for two
 > reasons.
 >
 > 1. I don't know that it is 'essential'.
 > 2. I don't believe it is possible to have a single CUO.
Nicola Guarino:

 > I agree, although, if it's pretty clear that an ULO is not essential
 > for semantic interoperability, it *seems* indeed essential for
 > *affordable and scalable* semantic interoperability.  The latter is
 > hard to prove, however.
  I 
think the reason why it's hard to prove is that it's false.
  People have 
been building interoperable computer systems for the past 50 years (and for 
many centuries before that, people have been using interoperable systems for 
banking, plumbing, railroads, telegraph, electrical transmission, 
etc.).  None of them have ever had agreement on anything more than the 
local task-oriented aspects -- i.e., the details of the message formats, the 
signals that are transmitted, or the size and shape of the connectors (e.g., 
the plumbing or the railroad track).
  Following is the abstract and 
introduction of Bundy's paper.
John Sowa

____________________________________________________________________
  On 
Repairing Reasoning Reversals via Representational Refinements
  Alan 
Bundy, Fiona McNeill and Chris Walton
  Abstract
  Representation is a 
fluent.  A mismatch between the real world and an agent's representation 
of it can be signalled by unexpected failures (or successes) of the agent's 
reasoning. The `real world' may include the ontologies of other 
agents. Such mismatches can be repaired by refining or abstracting an 
agent's ontology. These refinements or abstractions may not be limited to 
changes of belief, but may also change the signature of the agent's ontology. 
We describe the implementation and successful evaluation of these ideas in 
the ORS system. ORS diagnoses failures in plan execution and then repairs 
the faulty ontologies. Our automated approach to dynamic ontology repair 
has been designed specifically to address real issues in multi-agent systems, 
for instance, as envisaged in the Semantic 
Web.
  Introduction
  The first author [AB] has a vivid memory of his 
introductory applied mathematics lecture during his first year at 
university. The lecturer delivered a sermon designed to rid the 
incoming students of a heresy. This heresy was to entertain a vision of a 
complete mathematical model of the world. The lecturer correctly prophesied 
that the students were dissatisfied with the patent inadequacies of the 
mathematical models they had learnt at school and impatient, now they had 
arrived in the adult university world, to learn about sophisticated models 
that were free of caveats such as treating the weight of the string as 
negligible or ignoring the friction of the pulley.
  They were to be 
disappointed. Complete mathematical models of the real world were 
unattainable, because it was infinitely rich. Deciding which elements of the 
world were to be modelled and which could be safely ignored was the very 
essence of applied mathematics. It was a skill that students had to learn -- 
not one that they could render redundant by modelling everything.
  This 
all now seems obvious. AB is surprised at the naivety of his younger self -- 
since, before the sermon, he certainly was guilty of this very heresy. But it 
seems this lesson needs to be constantly relearnt by the AI community. We 
too model the real world, for instance, with symbolic representations of 
common-sense knowledge. We too become impatient with the inadequacies of our 
models and strive to enrich them. We too dream of a complete model of 
common-sense knowledge and even aim to implement such a model, cf. the Cyc 
Project. But even Cycorp is learning to cure itself of this heresy, by 
tailoring particular knowledge bases to particular applications, underpinned 
by a common core.
  If we accept the need to free ourselves of this heresy 
and accept that knowledge bases only need to be good enough for their 
application, then there is a corollary that we must also accept: the need for 
the knowledge in them to be fluent, i.e., to change during its use. And, of 
course, we do accept this corollary. We build adaptive systems that learn to 
tailor their behaviour to a user or improve their capabilities over time. 
We have belief-revision mechanisms, such as truth maintenance (Doyle 1979), 
that add and remove knowledge from the knowledge base.
  However, it is 
the thesis of this paper that none of this goes far enough. In addition, we 
must consider the dynamic evolution of the underlying formalism in which the 
knowledge is represented. To be concrete, in a logic-based representation the 
predicates and functions, their arities and their types, may all need 
to change during the course of reasoning.
  Once you start looking, 
human common-sense reasoning is full of examples of this requirement. 
Consider, for instance, Joseph Black's discovery of latent heat. 
Before Black, the concepts of heat and temperature were conflated. It was 
thus a paradox that a liquid could change heat content, but not temperature, 
as it converted to a solid or a gas. Before formulating his theory of latent 
heat, Black had to separate these two conflated concepts to remove the 
paradox (Wiser & Carey 1983). Representational repair can also move in 
the opposite direction: the conflation of "morning star" and "evening star" 
into "Venus", being one of the most famous examples.
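
To make the notion of a signature change concrete, here is a small Python 
sketch (mine, not the authors') of Black's repair: the conflated predicate 
heat(x) is split into heat_content(x) and temperature(x), so the change is to 
the ontology's vocabulary, not merely to its beliefs.

# Before the repair: one symbol does double duty.
signature_before = {
    "heat": {"arity": 1, "type": "Substance -> Real"},
}

# After the repair: two distinct symbols, so latent heat can be expressed
# as a change in heat_content at constant temperature.
signature_after = {
    "heat_content": {"arity": 1, "type": "Substance -> Real"},
    "temperature":  {"arity": 1, "type": "Substance -> Real"},
}

def repair_signature(signature, old_symbol, new_symbols):
    """Replace one predicate with several refinements of it (a toy repair)."""
    spec = signature.pop(old_symbol)
    for name in new_symbols:
        signature[name] = dict(spec)
    return signature
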
  But such 
representational refinement is not a rare event reserved to highly creative 
individuals; it's a commonplace occurrence for all of us. Every day we form 
new models to describe current situations and solve new problems: 
from making travel plans to understanding relationships with and between 
newly met people. These models undergo constant refinement as we learn more 
about the situations and get deeper into the problems.
  Consider, for 
instance, the commonplace experience of buying something from a 
coin-in-the-slot machine. Suppose the item to be bought costs £2. Initially, 
we may believe that having £2 in cash is a sufficient precondition for the 
buying action. However, we soon learn to refine that precondition to 
having £2 in coins -- the machine does not take notes. When we try to use 
the coins we have, we must refine further to exclude the new 50p coins -- 
the machine is old and has not yet been updated to the new coin. But even 
some of the, apparently legitimate, coins we have are rejected. 
Perhaps they are too worn to be recognised by the machine. Later a friend 
shows us that this machine will also accept some foreign coins, which, 
apparently, it confuses with British ones. Refining our preconditions to adapt 
them to the real world of this machine does not just involve a change of 
belief. We have to represent new concepts: "coins excluding the new 50p", 
"coins that are not too worn to be accepted by this particular machine", 
"foreign coins that will fool this machine", etc.
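
As a toy illustration of those successive refinements (my own sketch in 
Python, not the paper's formalism), each failure forces a narrower concept 
into the representation rather than just a revised belief about an existing 
one:

from dataclasses import dataclass

@dataclass
class Coin:
    value_pence: int
    is_new_50p: bool = False
    too_worn: bool = False
    foreign_but_accepted: bool = False   # the machine confuses it with a British coin

def precondition_v1(cash_pence):
    return cash_pence >= 200                                   # "£2 in cash"

def precondition_v2(coins):
    return sum(c.value_pence for c in coins) >= 200            # "£2 in coins"

def precondition_v3(coins):
    """£2 in coins this particular machine will actually accept."""
    def accepted(c):
        if c.foreign_but_accepted:
            return True
        return not c.is_new_50p and not c.too_worn
    return sum(c.value_pence for c in coins if accepted(c)) >= 200
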
  As another example, 
consider the experiment conducted by Andrea diSessa on first-year MIT 
physics students (diSessa 1983). The students were asked to imagine a 
situation in which a ball is dropped from a height onto the floor.  
Initially, the ball has potential but not kinetic energy.  Just before 
it hits the floor it has kinetic but not potential energy.  As it hits 
the floor it has neither.  Where did the energy go?  The 
students had trouble answering this question because they had 
idealised the ball as a particle with mass but no extent.  To solve 
the problem they had to refine their representation to give the 
ball extent, so that the `missing' energy could be stored in 
the deformation of the ball. Note that this requires a change in the 
representation of the ball, not just a change of belief about it.
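
A back-of-the-envelope version of the refined model (my illustration, not 
diSessa's or the authors') makes the bookkeeping explicit: once the ball has 
extent, the "missing" energy can sit in a deformation term.

G = 9.81   # gravitational acceleration, m/s^2

def energy_budget(mass_kg, drop_height_m):
    """Where the energy is at three instants of the drop (no dissipation)."""
    total = mass_kg * G * drop_height_m
    at_release         = {"potential": total, "kinetic": 0.0,   "deformation": 0.0}
    just_before_floor  = {"potential": 0.0,   "kinetic": total, "deformation": 0.0}
    # Point-particle model: nowhere for the energy to go at impact.
    # Extended-ball model: it is stored, briefly, in elastic deformation.
    at_max_compression = {"potential": 0.0,   "kinetic": 0.0,   "deformation": total}
    return at_release, just_before_floor, at_max_compression
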
  The 
investigation of representational refinement has become especially urgent due 
to the demand for autonomous, interacting software agents, such as is 
envisaged in the Semantic Web (Berners-Lee, Hendler, & Lassila 2001). To 
enable such interaction it is assumed that the agents will share a common 
ontology. However, any experienced programmer knows that perfect ontological 
agreement between very large numbers of independently developed agents is 
unattainable. Even if all the ontology developers download their 
ontologies from the same server, they will do so at different times and 
get slightly different versions of the ontology. They will then tweak the 
initial ontologies to make them better suited to their particular 
application. We might safely assume a ~90% agreement between any two agents, 
but there will always be that ~10% disagreement and it will be a different 
10% for each pair. The technology we discuss below provides a 
partial solution to just this problem.
  Note that our proposal 
contrasts with previous approaches to ontology mapping, merging or aligning. 
Our mechanism does not assume complete access to all the ontologies whose 
mismatches are to be resolved. Indeed, we argue that complete access will 
often be unattainable for commercial or technical reasons, e.g., because the 
ontologies are being generated dynamically. Moreover, our mechanism 
doesn't require an off-line alignment of these mismatching 
ontologies. Rather, it tries to resolve the mismatches in a 
piecemeal fashion, as they arise and with limited, run-time 
interaction between the ontology-owning agents. It patches the 
ontologies only as much as is required to allow the agent interaction to 
continue successfully. Our mechanism is aimed at ontologies that are largely 
in agreement, e.g., different versions of the same ontology, rather than 
aligning ontologies with a completely different pedigree, which is the normal 
aim of conventional ontology mapping. It works completely automatically. 
This is essential to enable interacting agents to resolve their ontological 
discrepancies during run-time interactions. Again, this contrasts with 
conventional ontology mapping, which often requires human 
interaction.
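
To summarise the control flow being described, here is a deliberately 
simplified Python sketch (invented here; it is not the ORS implementation): 
execute plan steps, and when a step fails for ontological reasons, patch only 
the local copy of the ontology just enough to let the interaction continue.

class StepFailure(Exception):
    """Signals an unexpected failure of a plan step."""

def run_with_repair(plan, ontology, execute, diagnose, patch, max_repairs=5):
    """execute(action, ontology) raises StepFailure on failure;
    diagnose(failure, ontology) returns a mismatch (e.g., a predicate whose
    arity or argument types disagree) or None if the failure is not ontological;
    patch(mismatch, ontology) returns a minimally revised ontology."""
    for action in plan:
        for _ in range(max_repairs + 1):
            try:
                execute(action, ontology)
                break                                   # step succeeded; next action
            except StepFailure as failure:
                mismatch = diagnose(failure, ontology)
                if mismatch is None:
                    raise                               # not an ontological problem
                ontology = patch(mismatch, ontology)    # repair only what is needed
        else:
            raise StepFailure("could not repair ontology for step: %r" % (action,))
    return ontology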
 
   
_________________________________________________________________
Message Archives: http://colab.cim3.net/forum/ontac-forum/
To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
Subscribe/Unsubscribe/Config: 
http://colab.cim3.net/mailman/listinfo/ontac-forum/
Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
Community Wiki: 
http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/OntologyTaxonomyCoordinatingWG
 