Hi Ed --
Yes, of course. For present purposes, the strength of FOL, and of
Executable English, lies in bringing together diverse
conceptualizations and making them interoperate.
Interoperation here means machine-to-machine and, in the case of EE,
human-to-machine and, most importantly, also human-to-human
at the nontechnical level.
Any practical reasoning system provides hooks to "call" numeric and
other software tools (like the ones you are working on) as part of the
deduction process. (EE provides this, plus automatic SQL
generation and execution). So, it becomes a design call to
figure out what should be done deductively (or abductively) and what
should be handed over to, say, a large matrix inversion package.
What the FOL or EE package provides is the "glue" to make many such
things, both numeric and non-numeric, interoperate.
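To make the "glue" point concrete, here is a minimal sketch in Python
(not EE or IBL syntax; all of the names are hypothetical
illustrations): a toy solver that answers most goals deductively from
stored facts, but hands one designated goal over to a proven external
numeric package.

    # A toy "glue" layer: deduce what we can, call out for the rest.
    # Hypothetical names throughout -- this is not the EE/IBL API.
    import numpy as np

    # Goals answered deductively, here just from stored facts.
    FACTS = {("capacity", "plant-A"): 120, ("capacity", "plant-B"): 80}

    def invert(matrix):
        # The "call" interface: hand the numeric heavy lifting to a
        # proven external package rather than deducing it step by step.
        return np.linalg.inv(np.array(matrix)).tolist()

    BUILTINS = {"invert": invert}

    def solve(goal, *args):
        if goal in BUILTINS:
            return BUILTINS[goal](*args)   # delegated to external software
        return FACTS[(goal, *args)]        # the deductive part

    print(solve("capacity", "plant-A"))                # by deduction
    print(solve("invert", [[2.0, 0.0], [0.0, 4.0]]))   # by numpy

The design call mentioned above is then simply which goals go in
BUILTINS and which stay in the rule base.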
That said, the "calls" had better be to proven software, since the
business-level English explanations that EE provides stop at the "call"
interface, and merely say, e.g., "that's what the matrix inversion
package said".
Moreover, there are sometimes advantages in carrying part of a numeric
computation along in EE, and good deduction engines plus ever-faster
hardware are making this easier.
For example, we tested several Black-Scholes risk analysis sites on the
Web, and observed that they each found different answers from the same
inputs! Since the programmer's decisions are hidden in the
code on those sites, it's hard to figure out what was done. On
the other hand, in [1], Black-Scholes is written in EE with
minimal "calls". When you run it, you can get an English
explanation of how it got its answers.
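For readers who want to see where such discrepancies creep in, here is
the standard textbook Black-Scholes call price in Python. This is a
generic sketch, not the EE agent in [1]; the point is that choices like
the compounding convention, or how T is counted, are exactly the kind
of decisions that stay hidden in site code.

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        # Textbook Black-Scholes price of a European call.
        # S: spot, K: strike, T: years to expiry,
        # r: continuously compounded rate, sigma: annual volatility.
        # Day-count and compounding conventions vary by implementation --
        # the hidden programmer decisions at issue above.
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(round(bs_call(S=100, K=100, T=0.5, r=0.05, sigma=0.2), 4))

Two sites that disagree on, say, calendar versus trading days for T
will disagree on the answer, and without the source you cannot tell
which choice was made.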
Cheers, -- Adrian
[1] www.reengineeringllc.com/demo_agents/BlackScholes1.agent
Internet Business Logic (R)
Executable open vocabulary English
Online at www.reengineeringllc.com
Shared use is free
Adrian Walker
Reengineering
Phone: USA 860 830 2085
On 11/20/06, Measure, Ed (Civ, ARL/CISD) <emeasure@xxxxxxxxxxxx> wrote:
Adrian,
I'm not dissing EE; I think it may be great for some simple
applications. It's less clear to me how powerful it is for some more
complex problems. Let me mention tasks with which I am slightly
familiar.
Given a weather model forecast and GIS terrain data, plan an
operational mission exploiting your knowledge of terrain and weather.
Computerized systems make it possible, but not easy, to take detailed
account of the effects on the components of your system. Translating
the raw terrain and weather data into plan-relevant effects requires a
lot of information and processing, much of which is unlikely to be
most neatly expressed in EE. In particular, how do you get such a
language to do mathematical or geometric reasoning? We, and others,
are building systems to do such tasks. I would hope that CDSI will be
able to incorporate, use, or at least permit such tools.
You say: perhaps it's worth spending some time evaluating one of its main
representations -- English -- for
CDSI?
I'm sure it is worthwhile to
evaluate the uses of EE, but the brain uses a whole lot of data representation
schemes for specialized purposes, most of them not readily expressible in purely
linguistic terms. These data representations and their associated data
processing do most of the heavy lifting of analysis, planning and acting well
below the level of consciousness. Common schemes include maps,
models and networks. EE as a tool seems like a good idea - but as a
one-size-fits-all, it's just too small.
One familiar example: Humans, with a little practice, can
throw a rock or ball at a target fairly accurately up to 100 feet or
more. How do you express the visual, kinesthetic, and other
elements that go into planning such an action in something like EE?
Language was developed for interhuman communication, and it's very good at
that. Human to machine, machine to human, and machine to machine
communication require some additional semantic
infrastructure.
Ed
Hi Ed --
You wrote...
The most successful model we have for
interoperability is the brain, and it features a rich mixture of top-down and bottom-up
approaches to information integration. The biggest virtue I
see to W3C, aside from incorporating that same idea, is that these guys are
actually trying to solve real world problems, and have a proven track
record. Trying to create a working system out of Executable English (or
any other version of first order logic), or any logic, topos theory, or
whatever, might be amusing but is unlikely to solve any real world
problems anytime soon.
The W3C Semantic Web work is certainly intriguing at the research
level, but what real world problems have been solved using it, please?
Since the brain is "the most
successful model we have for interoperability", perhaps it's worth spending some
time evaluating one of its main representations -- English -- for
CDSI?
In particular, evaluating Executable English that is mapped
automatically to and from logic for inferencing over networked
databases?
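As a rough picture of what "mapped automatically to and from logic"
can mean -- a deliberately simplified, hypothetical sketch in Python,
not EE's actual mechanism, which is described at the site below -- an
English sentence pattern can serve both as display text and as a logic
predicate:

    # Hypothetical sketch: one sentence pattern doubling as a predicate.
    TEMPLATE = ("{company} supplies {part}", "supplies")

    def to_logic(company, part):
        # English -> logic: the sentence's fillers become an atom.
        return (TEMPLATE[1], company, part)

    def to_english(atom):
        # Logic -> English: render the atom back as a readable sentence.
        _, company, part = atom
        return TEMPLATE[0].format(company=company, part=part)

    atom = to_logic("Acme", "widgets")
    print(atom)              # ('supplies', 'Acme', 'widgets')
    print(to_english(atom))  # Acme supplies widgets

The round trip is what lets the same knowledge be inferred over by
machine and read back by nontechnical people.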
Some small examples are [1,2].
Something on the way to a solution to a real world problem is
described in [3]. You can view, run and change [1,2] and the example
described in [3] by pointing a browser to the same site.
How does this sound?
Cheers, -- Adrian
[1] www.reengineeringllc.com/demo_agents/OntologyInterop2.agent
[2] www.reengineeringllc.com/demo_agents/SemanticResolution1.agent
[3] www.reengineeringllc.com/Oil_Industry_Supply_Chain_by_Kowalski_and_Walker.pdf
Internet Business Logic (R)
Executable open vocabulary English
Online at www.reengineeringllc.com
Shared use is free
Adrian Walker
Reengineering
Phone: USA 860 830 2085
On 11/20/06, Measure, Ed (Civ, ARL/CISD) <emeasure@xxxxxxxxxxxx> wrote:
Brad,
I
agree with your comment on pure top-down approaches, mainly because nobody
has ever managed to solve such a problem in a complex domain, much less in
the hyper-complex domain under consideration. Your alternative,
unfortunately, seems rather worse.
Brad: "But mainly because
people just don't solve ontology differences that way in the real (non-IT)
world. They just buy a dictionary, or hire a translator. Problem
solved."
That's totally unsuitable for the tempo of military
operations, and probably for almost anything
else. Interoperability has been a problem for the military at
least since Alexander, and probably much longer. The traditional
(pre-computer) solutions have been standardization, regimentation, and a
rigidly hierarchical data flow structure. I assume the reason there
is a CDSI is the demonstrated failure of the traditional structure in
modern operations.
The most successful
model we have for interoperability is the brain, and it features a rich
mixture of top-down and bottom-up approaches to information
integration. The biggest virtue I see to W3C, aside from
incorporating that same idea, is that these guys are actually trying
to solve real world problems, and have a proven track
record. Trying to create a working system out of Executable
English (or any other version of first order logic), or any logic, topos
theory, or whatever, might be amusing but is unlikely to solve any real
world problems anytime soon.
If interoperability is a real goal, one
needs to look at real world interoperability examples, and see where they
fail. I don't think we can neglect the institutional problems
stemming from a large number of competing groups each trying to create
their own interoperable architectures, often for their own proprietary or
institutional reasons. W3C may not have the answer to this problem, but
it does have an approach, and a philosophy. The approach
incorporates two simple ideas: a flexible hierarchy of protocols and
self-description for both data and
protocol.
Ed
-----Original Message-----
From: cuo-wg-bounces@xxxxxxxxxxxxxx [mailto:cuo-wg-bounces@xxxxxxxxxxxxxx] On Behalf Of Brad Cox, Ph.D.
Sent: Monday, November 20, 2006 1:38 PM
To: rick@xxxxxxxxxxxxxx; common upper ontology working group
Subject: Re: [cuo-wg] White Paper
Thanks for the encouraging note, Richard. I'd backed off, convinced I
wasn't being heard. But, buoyed by your note, I'll take one more shot
at explaining what I've been trying to get across.
One of the things that's confusing me is that I don't feel I
understand what people mean by the term "N^2 problem". I'm guessing
that's shorthand for: costs increase as N*(N-1), which is of order N^2
as N grows. Fair enough; it's shorter.
But that applies only if all N machines are to be connected to all N-1
others. Actually, cost increases as the number of *interfaces*; N^2 is
just an upper bound on that. But why concentrate on the upper bound
when interfaces can be counted just as easily, without the concern
over whether the upper bound is realistic? For example, for N machines
in a linear pipeline, the number of interfaces is N-1, hardly N^2 or
even N*(N-1).
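A back-of-envelope count makes the gap vivid (a hypothetical
illustration in Python, nobody's tool):

    def interfaces_full_mesh(n):
        # Every machine connected to every other: the N*(N-1) bound
        # (half that if each interface is shared by both endpoints).
        return n * (n - 1)

    def interfaces_pipeline(n):
        # N machines in a linear pipeline: one interface per adjacent pair.
        return n - 1

    for n in (5, 50, 500):
        print(n, interfaces_full_mesh(n), interfaces_pipeline(n))

For 500 machines that is 249,500 versus 499 interfaces, which is why
counting actual interfaces matters.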
So let's rephrase the problem as one of semantic interoperability
between M interfaces, where M is larger than we might like but still
far less than N*(N-1). I've been trying to point out that there are
two ways of approaching that problem. I've called them the designed
approach and the evolved approach.
In the designed approach, a (small) community of experts uses
high-technology ontology tools to build a generalized solution (an
upper ontology) that can generate the mappings needed to make any
given interface interoperable. The approach doesn't much depend on
what standard (language) is used. I used OWL as my example because
that's what I'm most familiar with. Structured English, structured
French, or plain ol' Java/Cobol/Haskell would do about as well, albeit
with varying readability. What's important here is that the approach
is centrally planned and largely confined to an expert community,
although hopefully with at least some support from domain experts with
conflicting demands on their time.
The evolved approach is entirely different and more bottom-up. M
interfaces imply there are M groups of individuals that care about
making each specific interface (call it M(i)) interoperate. Those M
groups are empowered (governance?) to address the problem in much the
same way we solve interoperability with natural languages: by using
dictionaries and related tools, using interpreters, etc. Dictionaries
and interpreters are evolved systems. Externally, these are commercial
products that compete with each other in a competitive system (free
markets). But I could well imagine that domain experts within govt
might produce translation dictionaries that compete in a similar way,
if govt could find a way to incentivize them to focus on the problem
over other pressing uses of their time.
The point is, I could well see how the second (evolved) approach could
"solve" the interoperability problem as I've stated it. I have much
less confidence (approaching zero) that the designed approach (as I
defined it) ever could. This is partially because AI technology just
isn't very smart, and partially because you still need domain experts
and don't have a way to incentivize them to contribute, since you've
counted too heavily on high technology as the sole solution.
But mainly it's because people just don't solve ontology differences
that way in the real (non-IT) world. They just buy a dictionary, or
hire a translator. Problem solved.
--
Work: Brad Cox, Ph.D; Binary Group; Mail bcox@xxxxxxxxxxxxxxx
Home: 703 361 4751; Chat brdjcx@aim; Web http://virtualschool.edu
---------- Original Message -----------
From: richard murphy <rick@xxxxxxxxxxxxxx>
To: cuo-wg@xxxxxxxxxxxxxx
Sent: Mon, 20 Nov 2006 13:03:24 -0500
Subject: Re: [cuo-wg] White Paper
> Hi Pat, Brad & All:
>
> I missed the beginning of this conversation, but couldn't resist an
> opportunity to jump into the mix. There's some common ground here
> between Pat and Brad, but the conclusions we draw from the postings
> below are significant.
>
> Pat: I don't interpret your posting to mean we should all use OWL.
> Please correct me if I'm wrong, but I think you're just saying that
> standardization within a specific language provides for convention,
> so we can parse, validate and reason.
>
> Would you agree Brad's evolved system implies more than one language
> and interpretation across languages?
>
> Brad: Your point regarding evolved systems is an important one that
> should be fully explored in the context of CUO-WG. I'd suggest the
> language of complex adaptive systems provides for rich conversation
> in the context of CUO: evolution, adaptation, survivable, fitness,
> generative, are all great discussion points typically missing here ...
>
> Scanning your prior postings, I'm more inclined to believe there's
> progress at hand regarding automation, and that the human element in
> system evolution is not one of intervention, but of overcoming a
> knowledge barrier in the philosophy and logic from which our systems
> are designed.
>
> PH> Hey, hold on. The point of OWL is to provide a standard for
> ontology information exchange, not a centrally planned ontological
> technology. There is no standard, centrally planned OWL reasoner or
> OWL tool kit: indeed, they should evolve in just the way you
> describe. But without having a common language to communicate in,
> the evolutionary process can't even get started. Just as you can't
> have much of a free market if there is no standard currency.
>
> BC> The point is designed systems and evolved systems arise from two
> ways of solving similar problems. Evolved systems ("free markets"
> for example) often solve the hardest ones better, as the soviet
> economy's shot at central planning shows.
>
> BC> It was also an evolved system that has pretty much solved the
> N^2 problem of cross-domain ontologies between diverse real
> languages, a problem I think we do agree won't be solved by
> centrally planned technologies like OWL.
>
> --
> Best wishes,
>
> Rick
>
> email: rick@xxxxxxxxxxxxxx
> web: http://www.rickmurphy.org
> cell: 703-201-9129
------- End of Original Message -------
_________________________________________________________________
Message Archives: http://colab.cim3.net/forum/cuo-wg/
Subscribe/Unsubscribe/Config: http://colab.cim3.net/mailman/listinfo/cuo-wg/
To Post: mailto:cuo-wg@xxxxxxxxxxxxxx
Community Portal: http://colab.cim3.net/
Shared Files: http://colab.cim3.net/file/work/SICoP/cuo-wg/
Community Wiki: http://colab.cim3.net/cgi-bin/wiki.pl?SICoP/CommonUpperOntologyWG