
[ontac-forum] turning to a sociological framework

To: ontac-forum@xxxxxxxxxxxxxx
From: Ken Ewell <mitioke@xxxxxxxxxxxx>
Date: Fri, 13 Jan 2006 00:53:57 -0500
Message-id: <43C74075.2020503@xxxxxxxxxxxx>
Gentlemen,    (01)

Patrick Cassidy wrote:    (02)

    "Our problem in developing a COSMO is sociological"    (03)

I have already proposed our semantic model to some members of the 
COSMO-WG.  I feel I should say something more about it at this time 
where some people have pointed out that we need to effectively 
understand interaction and interconnectedness.      (04)

 From our perspective, the focus should be taken off of objects, 
entities, representations or specifications and theories for reasoning, 
and shifted to a meta-framework for interoperability.  It's not about 
signs or objects, for example, and it's not just about sharing 
information about them or representing their properties.  It's about 
controlling semiotic (or purposeful) systems of processes and events. I 
am making a sociological and concrete proposal and I want to be brief, 
so I won't get philosophical and I might be a little blunt.    (05)

Human understanding is enhanced by knowing (in principle) how to harness 
the inheritance and precedence factors and other control mechanisms from 
the significant objects (or signs of them) that exert or maintain their 
control over things or others throughout time and space. Such an 
understanding can be applied to system architecture and design to 
control the objects, entities and interactions of any system.    (06)

To illustrate this concept, let's consider a play of Shakespeare's, as 
Adam Pease did for SUMO and the IEEE.  I remember learning of this idea from 
a handbook about physics from a Russian author.  I wish I could recall 
the book and author but it is long past the length of my memory.   Tom 
Adi gave me the book so I could try to understand the nature of a simple 
hydrogen atom as a system.  Most of the book was elementary physics 
explained in clear language. There was also a good deal of metaphysics 
and I can paraphrase this idea from a passage in the book that did grab 
my attention.    (07)

The famous and centuries-old play, Hamlet, can be produced on the stage 
and on the screen.  One can only imagine the intent in Shakespeare's 
mind and the dream in his heart when he wrote the script of the play and 
assigned the roles and characters.  But we can be rather sure that he 
intended it to be interpreted as a tragedy rather than a comedy.  How was 
he able to guarantee his message and its timelessness through countless 
interpretations -- both good and bad ones at that?    (08)

Mel Gibson's movie version comes to mind.  He had no problem placing the 
scenes at the sea and at the castle with his multi-million dollar movie 
budget.  All the actors knew the script, and the play, and delivered 
their lines in practiced meter.  W.S. might have been very pleased with 
the outcome. I saw this movie and enjoyed the actors and the rendition.  
It was easy to interpret it as the great human tragedy that it 
represents; the semiotics of the script and the screen play and the 
scenes and acts themselves were in perfect harmony.  I felt there was a 
kind of fidelity to the author and the classical play.    (09)

Consider now, the play on the stage where the scenes are painted on 
canvas. There are props and implements to set the stage.  If the props 
are well-matched and the troupe is polished and professional, a 
resonating event unfolds.  Consider the nature of this relationship 
between the stage and the players, the script of the play and its 
delivery to the audience. And consider also their capacities, and how 
the audience, taking in the scenery of the stage while knowing the script 
in general, can more or less readily interpret it as the great human 
tragedy that it represents.      (010)

Now consider the high-school play where students play Hamlet and the 
Queen and other characters, often according to an abridged script, with 
props that only superficially represent the play and its implements.  
Still, given a good performance, one can come away with a fair interpretation.  
It can only be judged fair as a consequence of the interpretation, which 
in turn is a consequence of the harmony, fidelity and other relations 
(or lack thereof) that unfold in the course of the play on the 
stage.    (011)

Consider the grade-school play where children who barely understand the 
plot and meaning are set on a stage, where a sign adorns the place where 
the castle is, and another sign in blue indicates where the sea should 
be.  Here, the young actors, who barely know how to act, deliver their 
lines with such uncertainty that the meaning of the great tragedy can be 
lost to other interpretations and considerations of the current event.      (012)

And consider, finally, the extreme case where there is a pyramid where 
the castle should be and a modern city scape where the sea should be. 
And the actors, instead of delivering the lines of Hamlet's script, 
deliver them poorly or abandon the script entirely.    (013)

In such a case the symmetries that should exist between the basic 
elements of the stage and the script and the play, are broken.  As a 
consequence, the observer or audience may not get anything related to 
the tragedy at all. They might become confused or lose contact with the 
central meaning intended to be delivered by the author, director and 
actors and scenery.  The relations between elements known to have been 
assigned, and those that are actually manifested, are mangled and 
confused: with the script abandoned and the nexus of the language of the 
play absent, the meaning of the great human tragedy is lost.  A physical 
law is broken; the law of conservation is violated -- the law of the 
conservation of the art.      (014)

Now consider this concrete and formal framework, offered in response to 
the need for an interconnected framework for preserving (and measuring) 
sociological elements of meaning. We have only a few axioms to propose.  
These axioms are neutral with respect to any specified ontology and also 
to methods of inferencing. Here are the semiotics of interpersonal 
(interacting) objects:    (015)

From the Adi theory of (natural language) semantics (axiom 1)    (016)

There is a set T = {closed, open} containing two abstract objects 
representing symmetrical boundary conditions, and there is a set G = 
{self, others} containing two abstract objects representing symmetrical 
engagement conditions, such that the product of the two symmetry sets 
(the supersymmetry set) R = T x G = { r(j) | j = 1 to 4 } = { (closed, 
self), (open, self), (closed, others), (open, others) } or using a 
shorthand notation = {inward, outward, engaged, unengaged} is a set of 
abstract object pairs that represent polarities.    (017)

 From the Adi theory of (natural language) semantics (axiom 2)    (018)

There is a set P = { p(i) | i = 1, 2, 3 } = {assignment, manifestation, 
containment} containing three abstract objects representing all types of 
processes. For convenience, we write pi for p(i) and enumerate the power 
set P* = {s(i) | i = 1 to 8} = { {}, {p1}, {p2}, {p3}, {p1, p2}, {p1, 
p3}, {p2, p3}, {p1, p2, p3} }.    (019)
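As a quick sketch (in Python, purely illustrative -- the names are mine, not part of the theory), the two axioms enumerate a 4-element polarity set and an 8-element power set:

```python
from itertools import combinations

# Axiom 1: boundary and engagement conditions, and their product R,
# enumerated in the order given in the text.
T = ["closed", "open"]    # boundary conditions
G = ["self", "others"]    # engagement conditions
R = [("closed", "self"), ("open", "self"),
     ("closed", "others"), ("open", "others")]
shorthand = dict(zip(R, ["inward", "outward", "engaged", "unengaged"]))

# Axiom 2: the three process types and their power set P*.
P = ["assignment", "manifestation", "containment"]
P_star = [set(c) for r in range(len(P) + 1) for c in combinations(P, r)]

print(len(R), len(P_star))  # 4 8
```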

Crossed with each other, these two sets form a symmetrical framework and 
foundation for identifying, valuing and measuring the semantics of a 
message or a text, using the objects or specified entities of various 
ontologies or just the signs recognized from natural-language vocabulary.      (020)

Later I will show how Adi's theory provides the means to form triples 
for inferencing with the mappings between various processes and/or 
objects that are represented by signs in a natural language and as 
functional structures in a computer.    (021)

Adi's theories are based upon inductive reasoning that is completely 
spelled out in a chapter of a forthcoming book.  Some members of this 
group also have copies of this work.  Basically it ties together language, 
words and abstract concepts of the mind into a framework for valuing, 
measuring and interpreting the coded representations and structures of 
language stored in the memory of computing machines.  The abstract 
structures of consonants and of word roots were found by induction over 
word root interpretations from studies of natural languages.    (022)

Fundamentally, a specific sound made by a consonant or one or more 
phonemes in a natural language (Arabic, English, etc.) is ontologically 
mapped by the frames of a matrix that results from crossing the 
symmetrical boundary and engagement conditions, or polarities, with the 
categories of the power set P* enumerated above.      (023)

Each sound or phoneme (sh/ti, ch, gh/f/ph,  ck, b, r, etc.) is mapped to 
a specific cell that is an intersection of a polarity over one or more 
interacting processes (perceived as abstract ontic objects).  
Computationally, we can refer to any pair of these abstract objects by 
row and column numbers (e.g., i and y are mapped to row 0, column 1), 
that is, by cell coordinates {0,1}, an ordered pair, and also as an 
ontological mapping that is part of a function in a computer 
program.      (024)
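This cell addressing can be sketched as a simple lookup table. Only the i/y entry (row 0, column 1) is given above; the other entries are invented placeholders to show the shape of the structure:

```python
# Hypothetical sketch of the phoneme-to-cell table. Only the
# "i"/"y" -> (0, 1) entry is taken from the text; the other
# coordinates are invented placeholders.
phoneme_cell = {
    "i": (0, 1),   # row 0, column 1 (given in the text)
    "y": (0, 1),   # maps to the same cell as "i"
    "b": (2, 3),   # placeholder coordinates
    "r": (1, 2),   # placeholder coordinates
}

def cell_of(phoneme: str) -> tuple[int, int]:
    """Return the (row, column) cell a phoneme is mapped to."""
    return phoneme_cell[phoneme]

print(cell_of("i"))  # (0, 1)
```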

In addition, some sounds combine with one another in a regular and 
recurring way, and some sounds never go together.  To gain some 
certainty in the imprecise and error-prone modern natural languages we 
wanted to process, we studied the regular combinations of sounds in root 
word forms that have a very long history.  There is a long story here 
that I will mostly leave out for brevity -- the gist of which is 
that words with a long history tend to be more concrete in meaning 
and patterns of use.  They also serve as a semantic cover over a range of 
natural-language vocabulary -- much as Longman's dictionary for 
ESL students organizes tens of thousands of English terms with a smaller 
set of about 2,000 so-called concepts.    (025)

What we end up with is a formal framework for mapping the individual 
structure of primitive and regularly occurring signs used in language 
(and in its vocabulary) to abstract concepts of the mind used for 
interpersonal communication about and control over real things in the world.    (026)

To understand these functions computationally, we begin with a small 
taxonomy of ontological mappings.  A mathematical mapping has one of two 
equivalent forms    (027)

  1. f : X ==> Y where X and Y are sets
  2. f(x) = y where x is an element of set X and y is an element of set Y    (028)

The mapping f connects elements of the domain X to elements of the range Y.    (029)

Ontological mappings and their domains and ranges involve seven types of 
processes (from Adi's second axiom, the power set P* introduced above)    (030)

 1. assignment
 2. manifestation
 3. containment
 4. assignment of manifestation
 5. assignment of containment
 6. manifestation of containment
 7. assignment of manifestation and containment    (031)

and each process has one of four polarities (Adi's first axiom)    (032)

 1. inward
 2. outward
 3. engaged
 4. separate (called "unengaged" in axiom 1 above)    (033)

A mapping is also called a function.  Ontological mappings could also be 
called ontological functions.   There are four main types of ontological 
functions (corresponding to speech acts) that can be identified and 
measured with this framework.    (034)

 1. Action fij ( xkm )
 2. Interaction fij : Xkm ==> Ynq
 3. Composition fij ( gip ( ) )
     double composition fij ( gip (hiw ( ) ) )
 4. Composite action fij ( gip ( xkm ) )
  where i, k, and n identify the process combination number 1 to 8
  (indexing the power set P* enumerated above)
  and j, m, p, q and w identify the polarity number 1 to 4    (035)

Only "interaction" is a complete mapping in the sense that it has a 
domain and a range.  The other three types of ontological mappings are 
incomplete in the sense that the domain and/or range may be missing.    (036)
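A minimal sketch of this taxonomy in Python (the class and function names are my own, not part of the framework):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mapping:
    process: int                  # process combination index
    polarity: int                 # polarity number, 1 to 4
    domain: Optional[str] = None  # X, if defined
    range_: Optional[str] = None  # Y, if defined

def kind(m: Mapping, inner: Optional[Mapping] = None) -> str:
    """Classify a mapping as one of the four ontological function types."""
    if inner is not None:
        # composition has no domain; composite action supplies one
        return "composite action" if m.domain else "composition"
    if m.domain and m.range_:
        return "interaction"      # complete: both domain and range
    if m.domain:
        return "action"           # range undefined
    raise ValueError("need a domain or an inner mapping")

print(kind(Mapping(5, 2, domain="X63", range_="Y81")))  # interaction
```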

"Action" is an ontological mapping where the range (Ynq) is not defined. 
This may be interpreted in different ways. For example, we can assume 
that the range is identical to the domain (xkm). We can also assume that 
the range is something we need to look for, or anything we want.    (037)

"Composition" is identical to the mathematical composition of two 
mappings. The range of the inner mapping gip serves as the domain of the 
outer mapping fij. Composed mappings must share the process type i. 
There are a few ontological mappings with double composition (three 
mappings combined).  Compositions have neither a domain nor a range. 
They are like procedures that can be applied to all types of things.    (038)

"Composite action" is a "composition" with a domain. Two mappings of the 
same process type are composed and are acting on the domain elements xkm.    (039)

By way of showing how powerful this method is at gaining insight, I 
relate the following to you.    (040)

In one recent study we compiled statistics of the distribution of 
ontological functions over about 30,000 frequently-used words of the 
English language. We found out many interesting things, not the least of 
which is: there is an absence of action and interaction by containment 
(there is no containment mapping applied to a defined domain set) in any 
of the vocabulary we tested. We interpreted it as a natural law of systems.    (041)

As a law of system control-- there is no direct control. No process or 
object can directly control (exercise a mapping of containment on) 
another process or object. Control of others (other interacting objects) 
is either done by assignment (control by instruction, the most common 
form) or by manifestation (control by action causing a reaction).    (042)

We found that the great majority of interactions (925/991 or about 93% 
of the vocabulary falling into this group) is done by assignment, i.e. 
by issuing instructions that others execute (machine control, obedience, 
cooperation in good faith) and a small percentage (66/991 or about 7%) 
of interactions is done by reaction (imitation, following a leader, 
reacting to a catalyst or provocateur).  For example, a human community 
is never directly coerced to do anything.    (043)
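The quoted proportions check out arithmetically:

```python
# Distribution of interactions over the 991 words in this group,
# as reported above: 925 by assignment, 66 by manifestation.
by_assignment, by_manifestation = 925, 66
total = by_assignment + by_manifestation

print(total)                                  # 991
print(round(100 * by_assignment / total))     # 93
print(round(100 * by_manifestation / total))  # 7
```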

We were able to run these statistics on thirty thousand English words 
by automated computer processes because we had previously defined a 
library of common lexical concepts for the roots of ancient languages 
that still occur in modern languages today. We believe at least 40% of 
the regularly used English vocabulary in use today stems from word 
roots used in ancient languages as long as 3,500 years ago -- maybe much 
longer.    (044)

We chose 2,750 specific root language forms for our original library or 
concept-base (something like a baseKB) and determined their 
interpretation mappings according to the proposed framework.  We had 
far fewer in the original study, about half that number. Unlike the 
abstract structures of consonants and roots that were "found" or 
"discovered" by induction over word root interpretations, cognitive 
frames are created by people in the here and now.    (045)

As a sociological proposal, here we are modeling sociological cognition. 
Cognitive frames are experimental, empirical frames. Still, we believe 
that these cognitive experiments are not arbitrary. To begin with, a 
crucial component of the artificial cognitive frame is a rigid abstract 
structure, a naturally occurring structure: the root interpretation 
mapping. Therefore, our cognitive experiments can be studied with the 
prospect of finding "natural laws" like the precedence rules, polarity 
effects and control mechanisms which we found in the two lower-level 
structures.    (046)

To show the members of this group how the proposed framework can be used 
with inferencing engines, we will refer to root interpretation mappings 
as the elements m1, m2,..., m2750 of the library of root interpretation 
mappings M (a conceptbase equivalent to a baseKB).    (047)

  Define a cognitive frame g as a triple <u, mi, v> that consists of a 
user u
  who implements a root interpretation mapping mi in an environment v:
     g = <u, mi, v>
      where u is a user
      mi is a function frame out of M, i = 1 to 2750
      v is an environment    (048)
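The triple can be written directly as a small record type; a sketch in Python (the class name is mine):

```python
from typing import NamedTuple

# A cognitive frame g = <u, mi, v> as defined above.
class CognitiveFrame(NamedTuple):
    user: str            # u: who implements the mapping
    mapping: str         # mi: a root interpretation mapping out of M
    environment: object  # v: the environment it is applied in

g = CognitiveFrame(user="user", mapping="m1", environment="environment")
print(g.mapping)  # m1
```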

We can simplify and visualize the function frames (briefly introduced 
above) for an easier discussion. We can refer to the abstract object 
pair ( s(i), r(j) ) by a verbal designation, e.g., "engaged assignment 
of containment" instead of ( s(6), r(3) ).    (049)

We can also use a self-explanatory arrow notation. For example, let us 
take the root interpretation mapping m1 out of M for the root "ssad lam 
hha", an Arabic word root meaning 'construction' (we are not inventing 
or specifying a structure here; we are taking one from sociological nature).    (050)

   m1 is f52 : X63 ==> Y81    (051)

   is simplified as
            m1 = a construction mapping =
              engaged assignment of containment
                =(assignment of manifestation)=>
              inward assignment of manifestation and containment    (052)
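The index notation can be decoded mechanically. The tables below follow the power-set numbering s(1)..s(8) from the earlier example, where (s(6), r(3)) reads "engaged assignment of containment"; this is a sketch of mine, not Adi's own code:

```python
# Decode the fij / Xkm index notation into verbal designations.
# Process combinations are numbered over the power set of
# {assignment, manifestation, containment}, with s(1) the empty set.
PROCESS = {
    1: "",  # empty set
    2: "assignment",
    3: "manifestation",
    4: "containment",
    5: "assignment of manifestation",
    6: "assignment of containment",
    7: "manifestation of containment",
    8: "assignment of manifestation and containment",
}
# Polarity 4 is "unengaged" in axiom 1 (also written "separate").
POLARITY = {1: "inward", 2: "outward", 3: "engaged", 4: "unengaged"}

def designate(process: int, polarity: int) -> str:
    """Verbal designation of an (s(i), r(j)) pair."""
    return f"{POLARITY[polarity]} {PROCESS[process]}".strip()

# m1 is f52 : X63 ==> Y81
print(designate(6, 3))  # engaged assignment of containment
print(designate(5, 2))  # outward assignment of manifestation
print(designate(8, 1))  # inward assignment of manifestation and containment
```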

To explore the possible structures of cognitive frames, it makes sense 
to bootstrap by starting with cognitive frames that implement root 
interpretation mappings that deal with construction and destruction. Let 
us return to m1 and practice some cognitive interpretation.    (053)

We say that the mapping "outward assignment of manifestation" designates 
a function. We define the interpret operator from the verbal 
designations of the (process combination, polarity) pairs, or the verbal 
designations of the root interpretation mappings, to verbal designations 
of cognitive frames, thus:    (054)

   "designate function" = interpret ( "=(assignment of manifestation)=>" )    (055)

    "a construct" = interpret ( "engaged assignment of containment" )
    "complex function" = interpret ( "inward assignment of manifestation 
and containment" )    (056)

The whole construction function m1 can thus be interpreted
    interpret (m1) = designate a construct to a complex function    (057)

Let us now introduce a cognitive frame for construction    (058)

   g1 = < user, m1, environment >    (059)

We define and use the instantiate operator. The cognitive frame g1 can 
be instantiated by modeling how the user "cook" implements the 
construction function m1 by designating a construct of "cuts" and 
"bread" to the complex function "make a sandwich."    (060)

   instantiate (g1 ) = < cook, m1, {bread, cuts} > = "make a sandwich"    (061)

The expression "make a sandwich" serves as a name for one of the 
instantiations of cognitive frame g1. The user "analyst" implements m1 
to find the "subject sandwich" in a text by looking for the 
co-occurrence of the words "bread" and "cuts."    (062)

   instantiate (g1 ) = < analyst, m1, {"bread," "cuts"} > = "define 
subject sandwich"    (063)

The user "carpenter" implements m1 to make a box by nailing some wood 
together.    (064)

  instantiate (g1 ) = < carpenter, m1, {wood, nails} > = "make a box"    (065)

The user "doctor" implements m1 to sew a wounded person together with 
needle and thread.    (066)

  instantiate (g1 ) = < doctor, m1, {wounded, needle, thread} > = 
"stitching up a cut"    (067)
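The instantiations above can be sketched as a lookup from bound frames to their names, with plain tuples standing in for the < ... > notation (the helper name is mine):

```python
# Sketch of the instantiate operator from the examples above. A
# cognitive frame instantiation binds a user and an environment to a
# root interpretation mapping; the users, environments and names below
# are taken from the text's own examples.
def instantiate(user, mapping, environment):
    """Bind a user and an environment to a root interpretation mapping."""
    return (user, mapping, tuple(environment))

# A lookup from instantiated frames of g1 to their names.
names = {
    instantiate("cook", "m1", ["bread", "cuts"]): "make a sandwich",
    instantiate("carpenter", "m1", ["wood", "nails"]): "make a box",
    instantiate("doctor", "m1", ["wounded", "needle", "thread"]):
        "stitching up a cut",
}

print(names[instantiate("cook", "m1", ["bread", "cuts"])])  # make a sandwich
```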

We have nearly 4000 root interpretation mappings defined now.  Most of 
those came from Arabic, others are derived from more pragmatic modeling 
considerations and are taken from studies of modern German and English 
and a little French.    (068)

We believe this framework can greatly benefit the ontology work of others and 
provide a foundation for interoperability between various specifications 
with various objects, theories and inferencing apparatus.    (069)

As I do not want to add to the noise level more than necessary, let me 
close by simply saying that additional information is available on request.    (070)

Respectfully submitted,    (071)

Ken Ewell
Management Information Technologies, Inc. (MITi)
mitioke@xxxxxxxxxxxx    (072)

>-----Original Message-----
>From: ontac-forum-bounces@xxxxxxxxxxxxxx
>[mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of Cassidy,
>Patrick J.
>Sent: Thursday, January 12, 2006 12:59 AM
>To: ONTAC-WG General Discussion
>Subject: RE: [ontac-forum] Re: The world may fundamentally be
>inexplicable -Cyc definitions?
>In response to three of John Sowa's comments:
>(1) [John Sowa] The major problem is that there is no document that
>defines the "aim of this group":
> > [Michael Gruninger] I do not think that it fits into the aim of this
>  1.1  The "aim of this group" is defined in the charter:
>  1.2  The current status of existing ONTACWG projects was listed in
>the message at:
>      http://colab.cim3.net/forum/ontac-forum/2005-12/msg00114.html
>   You should note in both cases that the emphasis was on actual
>construction of computational resources. At present this means: the web
>site with pointers to resources and ongoing work; the effort to define
>requirements for a registry that will allow precise specification of
>relations among knowledge classifications: and the adoption of a Common
>Semantic Model (an ontology or lattice of ontologies) that will serve
>to provide the conceptual defining vocabulary for all of the ontologies
>and terminologies that are of interest to ONTACWG members.  Other
>projects may be suggested by members.
>   Although discussion of basic issues is certainly a necessary part of
>preparing for concrete effort, the emphasis has to be on actual
>building of the resources that may be useful.  Just one of the nine
>bullets in the charter (i.e. the "aim of this group") mentions
>discussion.  That is a reasonable guide: The ratio of work to
>discussion could profitably be about 8:1.  Thus far the ratio has been
>considerably lower.  At the initial phase this should not be
>surprising. I hope we can get quickly beyond this initial phase.
>  1.3 I would suggest that to keep the focus on constructive effort,
>those who wish to comment on the efforts of this group first make a
>concrete proposal regarding the structure of some computational
>artifact that can help advance the goals of this group.  If nothing
>else comes to mind, by default, I would suggest that one provide a
>specific comment on the structure or content of the merged TopLevel of
>OpenCyc, SUMO, and DOLCE (with some elements of BFO and ISO15926) which
>I placed on our site at:
>   http://colab.cim3.net/cgi-bin/wiki.pl?CosmoWG/TopLevel
>  An OWL version of that hierarchy is at:
>Specific comments on specific artifacts will help this group to learn
>what its members believe will be useful.
>  1.4  To date, among the more general comments have been the
>   1.4.1   That the COSMO should or may be a lattice of ontologies
>rather than a single logically coherent ontology
>   1.4.2   That getting wide agreement will be facilitated by having a
>small and sparsely axiomatized common ontology rather than something as
>complex as one of the existing upper ontologies.
>  As best I can tell there is little or no disagreement with 1.4.1.
>  As for 1.4.2, it is still not clear just how large or small the
>common ontology can be and still function both as a commonly accepted
>ontology and as a useful artifact that can promote interoperability.  I
>strongly suggest that we discover the answer to the unresolved size
>question by building that ontology and learning in practice which
>different views cannot be accommodated within a single logically
>coherent ontology.  From that point, we can discover what the structure
>of the lattice would have to be to include a wider range of concepts.
>(2) [John Sowa] After spending their first 5 years with interpretation
>#1, the
>Cyc project abandoned it around 1990 in favor of interpretation #3.
>I can't imagine that anybody seriously believes that the ONTAC WG
>can accomplish what Cyc failed to do in anything less than the
>21 years Cyc has already spent.
>  I believe that OpenCyc in its current form would provide an adequate
>technical basis for a Common Semantic Model that could specify meanings
>and thereby relations among the different Knowledge Classification
>Systems.  So would SUMO.  It is therefore not necessary to "accomplish
>what Cyc failed to do" - technically.  The problem at this stage is not
>technical - indeed we are not trying to solve unresolved technical
>problems, or to do anything technical that Cyc or SUMO has not already
>done.  Our problem in developing a COSMO is sociological - we are
>trying to find out what ontology or subset can be agreed on by most of
>the participants in ONTACWG as adequate for their purposes.  To do
>this, it will probably have to be a lot simpler than Cyc, whose
>complexity is daunting.  I have proposed a very simple merged hierarchy
>as the barest starting point to try to discover the points of
>agreement.  To be useful, this bare outline will have to be expanded,
>but the expansion should be done in stages that can be reviewed and
>understood by our whole community at each step.  I do not believe that
>the solution to interoperability will become clear by further
>discussion, but by building a community-satisfactory ontology  (or
>lattice) for that purpose and testing it, in increasingly complex
>reasoning tasks.
>Semantic Interoperability merely means that systems using semantic
>content be able to share their semantic content.  Our common
>representation does not have to be any more semantically rich or
>technically complex than any existing system used for inferencing.  It
>just has to be adequate to express what they express, and agreed on by
>the community that wants to interoperate.
>The problem is complex enough, please don't make it sound any more
>complex than it is.
>(3) On the question of a defining vocabulary for predicates:
>  [John Sowa - characterizing Cyc's initial goal]  1. There shall be
>*one* single consistent theory that defines *all* the predicates (or
>concept & relation types) that are permissible in any message passed
>between any two systems that claim to adhere to the ONTAC standards.
>   This presents a very interesting question, and if anyone actually
>have any data on the issue, it would be very enlightening for us.
>Every OpenCyc microtheory uses the "BaseKB" as a common foundation.  I
>am aware of some microtheories that are logically inconsistent with
>some other microtheories (not with the BaseKB), but many microtheories
>seem to be created merely for computational efficiency purposes, not
>because they are actually inconsistent with others.  We are still left
>with the question of whether any microtheories (which ones?) require
>concepts in their axiomatic definition that are not already in the
>BaseKB - either directly or by transitive closure of the definitions.
>To answer that question we would have to agree on (a) what constitutes
>a primitive not fully-axiomatized predicate; and (b) what axioms are
>actually used for the predicate definitions.   The latter is a problem,
>since the axiomatic definitions of the predicates have not been made
>public by Cyc.
>   It will be extremely interesting and informative to discover which
>of the predicates in the Cyc microtheories cannot be specified by the
>conceptual elements of the BaseKB -- i.e., which of them are truly
>primitive in the Cyc system.  If anyone has any example of a
>microtheory predicate which is known to be primitive -- not definable
>via concepts used in the BaseKB, please provide us with that
>information.  Only concrete examples of logically incompatible axioms
>will provide us with the information necessary to determine what the
>structure of a lattice of ontologies will actually be.
>  Unless we can get specific examples of concepts in the microtheories
>that are not specified using concepts in the BaseKB, we cannot know if
>Cyc BaseKB provides an example of successful use of a defining
>vocabulary, or not.  And if not, we would still need to know whether
>the Cyc designers decided not to bother creating definitions out of
>time pressure, convenience, or computational efficiency rather than
>because of some principled logical requirement.  We can anticipate that
>there will in fact be some logical inconsistencies in some context
>(microtheory) representations - whether in Cyc or any other large
>ontology - that will require creating new nodes in a lattice of
>ontologies.  But we are interested in finding the maximal potential for
>agreement, and for that we need to know which predicates in
>microtheories are (1) logically incompatible, or (2) undefinable using
>only BaseKB concepts.  If Cyc is going to be cited as exemplifying some
>principle, we need very specific details of how it illustrates that
>principle.  Saying that something can't be done because some individual
>or company didn't do it is not only a logical non-sequitur, it
>conflates multiple potential reasons for things not getting done.
>   Rather than wondering or debating just what it is that Cyc has done
>(without detailed inside information) I would think it more productive
>to do as much as we can and see how far we can get.  We don't have to
>reinvent things already created, just decide which ones of them to use.
>Patrick Cassidy
>MITRE Corporation
>260 Industrial Way
>Eatontown, NJ 07724
>Mail Stop: MNJE
>Phone: 732-578-6340
>Cell: 908-565-4053
>Fax: 732-578-6012
>Email: pcassidy@xxxxxxxxx
>-----Original Message-----
>From: ontac-forum-bounces@xxxxxxxxxxxxxx
>[mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of John F. Sowa
>Sent: Wednesday, January 11, 2006 1:50 AM
>To: Michael Gruninger
>Cc: 'SUO WG'; CG; ONTAC-WG General Discussion
>Subject: Re: [ontac-forum] Re: The world may fundamentally be
>The major problem is that there is no document that precisely
>defines the "aim of this group":
> > I do not think that it fits into the aim of this group.
>Following are three possible interpretations of the ONTAC goals:
>  1. There shall be *one* single consistent theory that defines
>     *all* the predicates (or concept & relation types) that are
>     permissible in any message passed between any two systems
>     that claim to adhere to the ONTAC standards.
>  2. There shall be a *family* of theories, each registered in a
>     metadata registry as specified by ISO 11179, and any message
>     passed between any two systems that adhere to the ONTAC
>     standards shall specify which theory or theories are assumed
>     to define the predicates that occur in that message.
>  3. The theories specified in #2 shall consist of one central
>     core theory that is required and a family of optional
>     theories, each of which is consistent with the central
>     core.  Any message passed between any two systems that
>     adhere to the ONTAC standards shall specify which theory
>     or theories are assumed *in addition to* the required core.
>After spending their first 5 years with interpretation #1, the
>Cyc project abandoned it around 1990 in favor of interpretation #3.
>I can't imagine that anybody seriously believes that the ONTAC WG
>can accomplish what Cyc failed to do in anything less than the
>21 years Cyc has already spent.
>Therefore, I believe that interpretation #3 is the only one we can
>seriously contemplate on any reasonable time frame (i.e., something
>considerably less than 21 years).  Given that assumption, the main
>questions to address are
>  1. How big is the central core?  What axioms shall be required?
>  2. How do we relate the optional theories to one another and
>     to the central core?
>  3. How do we handle messages passed between systems that use
>     different (and possibly inconsistent) optional theories?
>  4. And most importantly, how will legacy systems communicate
>     with the new systems -- despite the fact that *none* of them
>     can be assumed to be consistent with the axioms of the core,
>     let alone the far more detailed axioms of the other theories?
>The point that I have been trying to make for the past five years
>of the SUO project and the past 3 months of the ONTAC WG is that
>unless these questions are addressed, there is no hope that any
>of this effort will be of any value whatever to the people who
>have day jobs working for managers who expect results.
> > ... we should at least agree on the language for specifying the
> > ontologies.  Common Logic seems a reasonable candidate for this
> > purpose.
>I'm happy with that choice, but I also realize that the overwhelming
>majority of the developers who will be expected to use the ONTAC WG
>ontology are far more familiar with UML than with any CL dialect.
>Even RDF and OWL are unfamiliar to most of them.
> > I was asking for a set of axioms in the language of FOL whose
> > models are isomorphic to the model theory of pi calculus.
>You can represent pi calculus in FOL.  Some axiomatizations may
>use more exotic languages, but they're not required.
>In any case, I don't expect the programmers to master pi calculus.
>But there are very widely used subsets of pi calculus that have
>been used for years -- examples include PERT charts, UML activity
>diagrams, Petri nets, etc.  A major reason why the business
>process modeling people have adopted pi calculus is that it is
>the natural next step beyond what most programmers already know
>-- namely PERT charts and UML activity diagrams (both of which
>can easily be axiomatized in Common Logic).
> > But that's sort of my point -- how can pi calculus be a module
> > if it is not specified as a set of axioms in the same language
> > as every other module?
>I assume that there will be a *family* of theories of various levels
>of detail, all of which can be axiomatized in some dialect of CL.
>Translating PERT charts and UML activity diagrams into CL is trivial
>compared to the task of teaching programmers PSL and situation calculus.

> > ... There are many queries about processes that do not require
> > explicitly using time, and you may want to answer queries about
> > time without committing to all assumptions of a particular
> > process ontology.  If these two are too tightly coupled, you
> > lose modularity, sharability and reusability.
>You are preaching to the choir.  I have been trying to make the
>point that the number of axioms required in the core should be
>very small.  In fact, the only consensus view about time is that
>any notation should be translatable to UTC format.
>I would put all methods for reasoning about interactions between
>time and processes into the optional modules -- that includes PSL,
>PERT charts (as translated to some dialect of CL) and many other
>versions that assume pi calculus or situation calculus or whatever.
> > Again, how can different modules be compared if they are written
> > in different languages with different model theories?
>Excellent question.  First, we could assume that all modules are
>written in some dialect of Common Logic.  Then we proceed as follows:
>    http://www.jfsowa.com/logic/theories.htm
>    Theories, Models, Reasoning, Language, and Truth
> > ... but I thought that our objective here was to build some common
> > ontologies, not reproduce the semantic integration problem.
>If a common ontology would magically solve all the problems, we could
>just recommend Cyc and declare a victory.  Cyc has been trying to do
>that for 21 years without having anything that can pay the rent --
>they still depend on government handouts for their continued existence.
>Unless we can provide a migration path for legacy systems, we have
>done *nothing* that anybody would actually use.  People are not going
>to flip a switch tomorrow and magically have trillions of dollars of
>software that conforms to whatever axioms the ONTAC WG proposes.
> > Given two first-order ontologies that intuitively cover the same
> > concepts, you can try to find a weaker common theory such that both
> > of the initial ontologies are consistent extensions of the common
> > theory.
>Yes, that is the recommendation of the theories.htm paper.
> > How do you hope to do this between pi calculus and any other
> > first-order ontology of process?
>Some theories, such as PERT charts and Petri nets, are already
>subsets of pi calculus.  Others may have more complex mappings,
>and the only common assumption might be that all times can be
>translated to UTC format -- but that's not necessarily bad.
>For 99.97% of the databases in the world, the times recorded
>in the DB do not include any information about how those times
>were derived.  That means we can assume that messages that
>assert event E happened at time T can be passed among systems
>with no assumptions beyond the common core (i.e., that the
>times can be translated to UTC format).
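That minimal core assumption -- every asserted event time is translatable to UTC -- is easy to enforce at the message boundary. A sketch using Python's standard library (the function name and the rejection of zone-less timestamps are my choices, not anything mandated by the core):

```python
# Sketch of the minimal common-core assumption about time: normalize
# any asserted event time to UTC before the message leaves the system.
from datetime import datetime, timezone, timedelta

def to_utc(local: datetime) -> datetime:
    """Normalize a zone-aware datetime to UTC; reject naive timestamps,
    since without a zone the time cannot be translated to UTC."""
    if local.tzinfo is None:
        raise ValueError("timestamp must carry a time zone")
    return local.astimezone(timezone.utc)

est = timezone(timedelta(hours=-5))                   # UTC-5, for example
t = to_utc(datetime(2006, 1, 13, 0, 53, tzinfo=est))
print(t.isoformat())  # 2006-01-13T05:53:00+00:00
```

Two systems that agree on nothing beyond this normalization can still exchange assertions of the form "event E happened at time T" without ambiguity.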
>Of course, more can and should be done, but version 1.0 must
>be able to accommodate legacy systems before we can add new
>features that really take advantage of the ontology.
>Message Archives: http://colab.cim3.net/forum/ontac-forum/
>To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
>Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
>Community Wiki:
>End of ontac-forum Digest, Vol 9, Issue 15
>    (073)

Message Archives: http://colab.cim3.net/forum/ontac-forum/
To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
Community Wiki: 
http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/OntologyTaxonomyCoordinatingWG    (074)