
Re: [ontac-forum] Surveyed Ontology "Library" Systems -- parts

To: ONTAC-WG General Discussion <ontac-forum@xxxxxxxxxxxxxx>
Cc: 'Arun Majumdar' <arun@xxxxxxxxxxxx>
From: "John F. Sowa" <sowa@xxxxxxxxxxx>
Date: Tue, 01 Nov 2005 15:04:03 -0500
Message-id: <4367CA33.60902@xxxxxxxxxxx>
Barry,

I would be the first to admit that the issues raised
in this thread are extremely difficult to address
in general, but they are also extremely important.

I wrote a paper a while ago to point out how difficult
they are and how much money has been spent on them
so far without reaching a satisfactory solution:

    The Challenge of Knowledge Soup

Twenty years ago, these problems were recognized as
important, and people like Doug Lenat (and his funders,
which included DARPA and other Beltway agencies) had
assumed many of them could be solved in 5 or 10 years.
But it's much harder than anyone had imagined in 1985.

JS>> The fact that they haven't yet been defined in terms
 >> of the CL semantics does not mean that they couldn't
 >> be so defined.  I strongly suspect that they could be.

BS> Unfortunately (in the case of the FMA, at least) only
 > with a huge amount of work.

Translating from one formalism to another is never easy.
That is why the Common Logic project was started:  it is
easier to define N transformations from N languages to 1
than to define N-squared transformations from N to N.

If each translation can be specified once and tools can be
written to automate the translation, the cost of the initial
specification becomes almost trivial in comparison.
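The arithmetic behind the hub-language argument can be made concrete. The sketch below is illustrative only -- the formalism names are invented, not an inventory of actual translators -- and it counts both the one-way figure the text cites (N translations into the common language) and the pairwise alternative:

```python
# Back-of-the-envelope count: pairwise translators vs. a common hub
# language such as Common Logic.  (Formalism names are illustrative.)

def pairwise_translators(n: int) -> int:
    """Direct translators between every ordered pair of n formalisms."""
    return n * (n - 1)          # roughly N-squared, as the text says

def hub_translators_one_way(n: int) -> int:
    """One translation from each formalism into the hub language."""
    return n                    # the "N transformations from N to 1"

def hub_translators_round_trip(n: int) -> int:
    """One translator into the hub and one out, per formalism."""
    return 2 * n

formalisms = ["KIF", "OWL", "UML", "F-logic", "Prolog"]   # n = 5
n = len(formalisms)
print(pairwise_translators(n))      # 20 direct translators
print(hub_translators_one_way(n))   # 5 via the hub, one-way
```

The gap widens quadratically: at n = 20 formalisms, pairwise needs 380 translators while the hub needs 20.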

JS>> It enables theories that use different choices of names
 >> to be related to one another by specifying the name maps.

BS> Good. Who will specify them? Who will check them? Who will
 > deal with all those many cases where seemingly identical terms
 > in different namespaces have either no definitions or logically
 > incompatible definitions?

There is no question that this is a lot of costly work.  But there
are tools that have been used to address such issues.  In
particular, I suggest the example of the legacy re-engineering
problem that Arun Majumdar and Andre Leclerc addressed.  A major
consulting firm estimated that the task, if done by hand, would
take 40 people two years -- a total of 80 person-years of work.

Before undertaking that project, the client wanted a second
opinion.  They hired Arun and Andre for a 6-week study project.
Instead of studying it, they solved the problem:  they analyzed the
relationships among 40 years of programs and English documentation
about the programs, determined the name maps, discovered many
errors and inconsistencies that had gone undetected for many
years, and produced one CD-ROM with exactly what the client
had wanted:  a data dictionary for the computer, an English
glossary for the humans, and UML diagrams that showed how all
the programs and data were interrelated.

That took two people 6 weeks -- 12 person-weeks instead of
80 person-years.  The client still had more work to do, but
that CD-ROM was what they needed to start.  Following is a paper
that discusses that example, among others.  In the bibliography
is a reference to a paper by Andre and Arun with more details:

    Analogical Reasoning

BS> ... but attempts to establish CL-compatible logical formalisms
 > and name maps e.g. in domains like anatomy (where considerable
 > energy has been invested) have thus far been less than impressively
 > successful.

Logical formalisms and ontologies are two separate undertakings.
Without a common logic, ontology alignment is impossible.
With a common logic, it is merely very difficult.  You need both
the common logic and the automated or at least semi-automated
tools.  The project by Andre and Arun was semi-automated in the
sense that the computer came back to ask them questions when it
found discrepancies it could not resolve.  That is an important
advantage of semi-automated methods:  computers are much better
than people in detecting inconsistencies, but they are not as
good as people in deciding what to do about them.
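That division of labor -- the machine detects discrepancies, a person decides how to resolve them -- can be sketched in a few lines.  This is a hypothetical illustration, not the tool Arun and Andre actually used; the term names and data are invented:

```python
# Minimal sketch of semi-automated name-map alignment: the program
# mechanically finds conflicting mappings and defers each one to a
# human, mirroring the division of labor described above.
# (Hypothetical data; not the actual Majumdar/Leclerc tool.)

def align(map_a: dict, map_b: dict, ask_human) -> dict:
    """Merge two name maps; on any conflict, let a person decide."""
    merged = dict(map_a)
    for term, target in map_b.items():
        if term in merged and merged[term] != target:
            # The computer is good at *detecting* the inconsistency...
            # ...but a person chooses what to do about it.
            merged[term] = ask_human(term, merged[term], target)
        else:
            merged[term] = target
    return merged

# Two legacy sources map the same term to different entities:
legacy = {"CUST_NO": "Customer.id", "ACCT": "Account"}
docs   = {"CUST_NO": "Client.id",   "LEDGER": "Ledger"}

# Stub resolver: a real system would prompt interactively.
resolved = align(legacy, docs, ask_human=lambda t, a, b: a)
print(resolved["CUST_NO"])   # Customer.id
print(resolved["LEDGER"])    # Ledger
```

In a real system the `ask_human` callback would queue the question for an analyst; the point is only that conflict *detection* is mechanical while conflict *resolution* is not.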

John Sowa

Message Archives: http://colab.cim3.net/forum/ontac-forum/
To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
Community Wiki: 
http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/OntologyTaxonomyCoordinatingWG