John,
I (also) like the idea of neutrality; it is probably essential both
technically and politically. One approach to neutrality is to omit anything
controversial. I'm wondering if there is another. (01)
Similar to your suggestion of modularity, perhaps the common framework is
itself modular in such a way that conflict is explicitly allowed for between
its modules. There is a tendency to think of these systems as monolithic
statements of Truth, and while some have had that intent, it may not be
necessary for our purposes. If we instead think of each of these ontological
modules as a "speech act", an assertion by an individual, group, or authority
at a particular time, then we can have a framework for dealing with a system
of these modules that may or may not be in conflict. (02)
The speech act asserting a set of statements forms one dimension of context
for those statements. A set of statements is valid within a context. There
are, of course, other forms of context, such as political or authority-based,
physical, or situational. All of these can govern how and when statements
within that context are to be treated. (03)
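To make this a little more concrete, here is a rough sketch (in Python, with
names I have made up purely for illustration) of what one of these contextual
speech-act modules might look like:

    from dataclasses import dataclass
    from typing import FrozenSet

    @dataclass(frozen=True)
    class SpeechAct:
        asserter: str      # the individual, group, or authority making the assertion
        asserted_at: str   # when the assertion was made

    @dataclass(frozen=True)
    class ContextualModule:
        speech_act: SpeechAct        # one dimension of context: who asserted it and when
        contexts: FrozenSet[str]     # other governing contexts: political, physical, situational, ...
        statements: FrozenSet[str]   # the statements held to be valid within those contexts

The point is only that the statements never float free; they always come
packaged with the speech act and contexts that govern them.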
Given such contextual modules, you can determine which modules are in
conflict with others, and perhaps provide logic to reduce or eliminate the
root causes. (For example, the assumption that time is the same for all
participants, which works just fine in the context of earth systems but not
in the context of space flight.) Almost all human abstractions seem to be
highly contextual, yet most logics lack the mechanisms to deal with this
because of their monotonicity restrictions. Note that the recently adopted
standard "Semantics of Business Vocabulary and Business Rules"
(http://www.omg.org/docs/bei/05-08-01.pdf) had to go beyond FOL explicitly
for some of the same reasons. (04)
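As a toy illustration of the conflict-detection idea (again in Python,
building on the sketch above, and using a naive "p" / "not p" check in place
of a real reasoner):

    from itertools import combinations
    from typing import List, Tuple

    def contradicts(s1: str, s2: str) -> bool:
        # Toy test: treat "not X" as the negation of "X"; a real system would use a prover.
        return s1 == "not " + s2 or s2 == "not " + s1

    def in_conflict(a: ContextualModule, b: ContextualModule) -> bool:
        # Two modules conflict if any statement of one contradicts a statement of the other.
        return any(contradicts(s, t) for s in a.statements for t in b.statements)

    def conflicts(modules: List[ContextualModule]) -> List[Tuple[ContextualModule, ContextualModule]]:
        # All pairs of modules that cannot be used together without contradiction.
        return [(a, b) for a, b in combinations(modules, 2) if in_conflict(a, b)]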
For a given purpose you can then decide which contexts apply, and apply
reasoning to the extent that those contexts are not in conflict. You can
also analyze the extent of agreement or conflict between contexts. (05)
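Continuing the same illustrative sketch, selecting the modules that apply to
a purpose and measuring agreement might look something like this (the
agreement measure here is deliberately crude):

    from typing import FrozenSet, List

    def modules_for(purpose_contexts: FrozenSet[str],
                    modules: List[ContextualModule]) -> List[ContextualModule]:
        # Keep only the modules governed by at least one context relevant to the purpose.
        return [m for m in modules if m.contexts & purpose_contexts]

    def agreement(a: ContextualModule, b: ContextualModule) -> float:
        # Crude overlap measure: 1.0 = identical statement sets, 0.0 = nothing in common.
        union = a.statements | b.statements
        return len(a.statements & b.statements) / len(union) if union else 1.0

Reasoning would then be applied only across the selected modules that the
conflict check above reports as compatible.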
Another example of the speech act with reference to models (some of which
may be ontologies) is with respect to intent. Is it the intent of a model
to represent what exists, something that shall exist, something that may
exist, etc.? Without such a speech act we just have a bunch of statements
without a purpose. (06)
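Illustratively, that intent could be carried on the speech act itself, for
example as one more field on the SpeechAct sketched earlier (the labels here
are just placeholders):

    from enum import Enum

    class Intent(Enum):
        DESCRIBES_WHAT_EXISTS = "as-is"          # claims to describe what currently exists
        PRESCRIBES_WHAT_SHALL_EXIST = "to-be"    # specifies something that shall exist
        EXPLORES_WHAT_MAY_EXIST = "possible"     # sketches something that may exist

A SpeechAct would then carry an intent field alongside the asserter and the
time of assertion.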
One of the goals for this is to cast a wider net for capturing knowledge,
much of which is expressed in ways that are imprecise, lacking in detail, or
contradictory. Some of it is just wrong. By capturing that knowledge as
contextual speech acts rather than as "Truth", we allow for the realities of
human expression. (07)
While I do have some ideas about how to model context, I am not sure how to
fit it into the formal theories (breaking each context into its own ontology
doesn't seem to work, as it is too structured). Category theory and IFF
(which I don't fully understand) seem to start to give us a way to talk
about the relationships between contexts. But I'm not sure we even need
that complexity at this point: if we can create our set of concepts within a
contextual framework, it becomes a separate problem to figure out how to
deal with those statements in various formal systems. (08)
So, in summary, can we find a way to "admit all", instead of settling for a
least common denominator, by applying context to statements as speech acts
from various communities or authorities? (09)
-Cory Casanave (010)
-----Original Message-----
From: ontac-forum-bounces@xxxxxxxxxxxxxx
[mailto:ontac-forum-bounces@xxxxxxxxxxxxxx] On Behalf Of John F. Sowa
Sent: Sunday, November 27, 2005 10:01 AM
To: ONTAC-WG General Discussion
Subject: [ontac-forum] Neutrality Principle (011)
In developing a unified framework, we need to get all
the major players in the ontology field to work together
right from the beginning. (012)
Since all the major systems are currently incompatible
with one another, that requirement imposes constraints
on what is possible. Therefore, I propose the following
*neutrality principle*: (013)
The unified framework UF should be neutral with respect
to all the major ontology projects that are currently
under development. That implies: (014)
1. Every system X that participates in the effort
should support import and export operators for
importing all of UF or any subset of UF to and
from X. (015)
2. UF should not contain any categories or relations
that would create an inconsistency with any major
system X; i.e., it should be possible to import
*all* of UF into X without causing an inconsistency. (016)
3. Importing UF into any system X and then exporting
it from X should result in a version UF' that is
logically equivalent to the original UF except for
possible cosmetic changes in the formatting. Those
changes should not cause any other system Y that
imported UF' to generate inferences that differed
from the inferences generated directly from UF. (017)
4. Points #2 and #3 imply that the initial version of UF
should avoid having a complex or detailed upper level,
since most of the inconsistencies between any two
ontologies result from problems at the top. It also
implies that the system should contain a minimal
number of relations whose definitions are not overly
restrictive; i.e., it is better to have *too few*
axioms than too many, since the more axioms there
are, the more conflicts arise. (018)
5. Point #3 implies that the emphasis of the UF should
not be on rich inference capabilities, since those are
usually highly context dependent and very likely to
lead to inconsistencies. Therefore UF would be better
suited to interchange and communication than to extended
inference or problem solving. The extended inferences
would be done by more specialized systems, which could
add additional axioms of their own and use either
logic-based methods or computational techniques. (019)
6. UF should avoid features that limit its use to any
particular notation or system of inference. OWL,
for example, could be used to represent all of UF,
but UF should not have any dependencies on any features
of OWL -- either in logic or in formatting -- that are
not available in all major systems of ontology. (020)
The details of these points are negotiable, but the fundamental
principle of neutrality should be that UF shall be based on the
minimal subset of features that do not create inconsistencies
with any major ontology. (021)
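To illustrate, here is a minimal sketch (in Python, with made-up names for
the import and export operators, and with the test for logical equivalence
deliberately left abstract) of what the round-trip requirement in points #1
and #3 amounts to:

    from typing import Callable, Protocol, Set

    Framework = Set[str]   # stand-in for whatever logical form UF is actually expressed in

    class OntologySystem(Protocol):
        # Point #1: every participating system X can import and export UF or any subset of it.
        def import_uf(self, uf: Framework) -> None: ...
        def export_uf(self) -> Framework: ...

    def round_trip_is_neutral(uf: Framework,
                              system_x: OntologySystem,
                              logically_equivalent: Callable[[Framework, Framework], bool]) -> bool:
        # Point #3: importing UF into X and exporting it again must yield a UF'
        # that is logically equivalent to the original, cosmetic differences aside.
        system_x.import_uf(uf)
        uf_prime = system_x.export_uf()
        return logically_equivalent(uf, uf_prime)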
To avoid slighting anybody, I'll avoid listing what ontologies
should be considered "major". (022)
John Sowa (023)
_________________________________________________________________
Message Archives: http://colab.cim3.net/forum/ontac-forum/
To Post: mailto:ontac-forum@xxxxxxxxxxxxxx
Subscribe/Unsubscribe/Config:
http://colab.cim3.net/mailman/listinfo/ontac-forum/
Shared Files: http://colab.cim3.net/file/work/SICoP/ontac/
Community Wiki:
http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/OntologyTaxonomyCoordinatingWG (024)