
Re: [ontac-forum] Problems of ontology

To: Niemann.Brand@xxxxxxxxxxxxxxx
Cc: ONTAC-WG General Discussion <ontac-forum@xxxxxxxxxxxxxx>
From: "John F. Sowa" <sowa@xxxxxxxxxxx>
Date: Sun, 14 May 2006 17:31:26 -0400
Message-id: <4467A1AE.9080503@xxxxxxxxxxx>
Brand,

I'm glad that you liked the comments.

 > Thank you for your succinct conclusion.

But I would also like to emphasize that I believe
research on ontology is important.  I would mention
Cyc as an example of a very large system that does
have an upper ontology.  But its director, Doug Lenat,
has made the following points:

  1. The upper ontology is the *least* important part.

  2. The middle levels of the ontology contain the most
     heavily used concepts.

  3. For any particular problem or task, most of the
     detailed reasoning is done at the lowest level of
     _microtheories_, of which there were 6,000 in 2004.

These three points, together with other studies, such
as Alan Bundy's paper and my paper on knowledge soup,
indicate that the most important work on making
systems interoperate depends on the categories and
axioms at the lower levels, not the upper levels.

Bundy also made the point that most private companies
and many government agencies consider their ontologies
to be trade secrets or confidential material that they
have no intention of divulging or sharing with anybody
else.  That provides further incentive to focus on
methods that address task-oriented revisions rather
than global alignments.

In my previous note, I included the introduction to
Bundy's paper.  Following is the conclusion.

John
_______________________________________________________

Conclusion

We have argued that representation is a fluent in commonsense
reasoning, and that repairs to and evolution of representation
are everyday events.  If intelligent agents are to conduct
commonsense reasoning, it will be necessary to build automated
reasoning systems in which the representation can evolve
dynamically in reaction to unexpected failures and successes
of reasoning, and to other triggers yet to be explored.  In
particular, such functionality will be an essential ingredient
in interacting, peer-to-peer multi-agent systems, such as are
envisaged in the Semantic Web.  Agents will need to be able
to cope dynamically with minor variations between their
ontologies.

We have initiated research into automatic dynamic ontology
evolution. The ORS system operates in a planning domain
over KIF ontologies, where failures in plan execution
trigger repairs to the planning agent's ontology. Despite
the simplifying assumptions and limited functionality of this
prototype system, it can account for over a third of the
ontological mismatches between different versions of several
popular and widely used ontologies.  More details about this
work can be found in (McNeill 2005).

Further work is clearly required to lift the current limitations
of this work:  removing the simplifying assumptions about
agents; extending it to other kinds of ontology, such as those
based on description logics; extending the kinds of mismatch
and repair triggers it can deal with; applying it to
non-planning domains; and implementing it within a fully
peer-to-peer architecture, in which all agents are forming
plans and repairing their ontologies.  We are currently engaged
in this further work within the EU Open Knowledge Project.
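_______________________________________________________

To make the mismatch-and-repair cycle that Bundy describes a
little more concrete, here is a minimal sketch in Python.  It
is not the ORS implementation; the predicate name, the
dictionary "ontology", and the arity-only repair are
illustrative assumptions of mine.  The agent's plan step fails
because the service now expects an extra argument, and that
failure triggers a repair to the agent's signature for the
predicate:

  # Toy illustration only; not ORS.  An "ontology" here is just a
  # dict mapping predicate names to argument lists, and the only
  # repair implemented is adopting the service's argument list.

  class ExecutionFailure(Exception):
      """Raised when a service rejects a plan step over a signature mismatch."""

  def execute(service_signatures, predicate, args):
      # The service checks the call against its own signature.
      expected = service_signatures.get(predicate)
      if expected is None or len(expected) != len(args):
          raise ExecutionFailure(
              f"{predicate}/{len(args)} does not match the service's "
              f"{predicate}/{len(expected) if expected else '?'}")
      return f"executed {predicate}{tuple(args)}"

  def repair_signature(agent_ontology, service_signatures, predicate):
      # Repair: adopt the service's argument list for this predicate.
      agent_ontology[predicate] = list(service_signatures[predicate])

  # The agent's ontology lacks the 'date' argument the service now requires.
  agent_ontology = {"book_flight": ["person", "origin", "destination"]}
  service_signatures = {"book_flight": ["person", "origin", "destination", "date"]}

  step = ("book_flight", ["alice", "EDI", "AMS"])
  try:
      execute(service_signatures, *step)
  except ExecutionFailure as err:
      print("plan step failed:", err)
      repair_signature(agent_ontology, service_signatures, "book_flight")
      print("repaired signature:", agent_ontology["book_flight"])

A real system such as ORS works over KIF ontologies and handles
a much wider range of mismatches and repairs than this toy
arity fix, but the control loop (execute, detect a failure,
repair the ontology, replan) is the same in spirit.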

