John and
Adam,
Please see my commentary on your points and concerns.
Regards,
Azamat Abdoullaev, EIS
Encyclopedic Intelligent Systems Ltd
John Sowa wrote:
<RDF and OWL are too limited, clumsy, and
inefficient to support any serious work in knowledge representation and
reasoning.>
<My complaint about RDF and OWL is that they are
terrible languages for all three categories of humans -- #1, #2, and #3
-- and they are also horribly inefficient for computers. They do
not have a target audience.>
Adam Saltiel responded:
<I understand that things can go in circles in the AI world, as just
implicitly mentioned in John's post>
<But if the tools are for the "wrong" language this simply isn't good
enough, is it? This in turn will have implications for funding efforts and hopes
of success for different projects, so I think the issues should be considered
very seriously.>
AA:
Indeed, the matter looks serious from both the public and the scientific side, besides the technical issues which Andrian has been trying to point out for a long while. The first issue concerns securing huge public funds by promising a sort of magic technology, the intellectual technologies of the Knowledge Society, without doing the foundational ontological groundwork of efforts such as SUO, ONTAC, or USECS. The price of ignoring a unifying ontology framework for building advanced information systems may run to many millions, considering that the European Union has initiated multi-billion R&D projects in Information Society Technologies, seemingly to catch up with similar IT programmes in the USA. In order to lay down the knowledge infrastructures of the upcoming Information Society, the EU's Research Council and the European Parliament allocated 3.8 billion Euro for Knowledge Technologies within the 6th European Union Framework Programme (FP6) for Research and Technological Development, with a total budget of 17.5 billion Euro.
Within the FP6 Programme, all the web-based knowledge technology projects are largely concerned with ontology research, design, learning, and management. To meet the high social and political expectations, so-called 'networks of excellence' have been forming. Thus, the Knowledge Web network of excellence is engaged to transfer ontology technology from universities to industry; the Data Information and Process Integration group is contracted to contribute to the infrastructure of semantic web services; while the Semantic Knowledge Technologies network is signed up to produce ontological software and tools for semantic web services.
From the scientific side, what is most worrisome is a misinterpretation of the whole matter of SW ontology, apt to result in public distrust of information and computing science, as has occurred with another noisy scientific enterprise, the cloning research projects. It is no deep arcanum that trying to build a unifying representation language [for web resources and data] based solely on formal logic tools, without an ontological foundation and semiotic fundament, is misguided. Nevertheless, most of the EU's semantic web projects are performed under the costly collective delusion that ontology-based semantic web and services technologies can be constructed without a unified framework ontology providing an integrated representation and reasoning platform. One need not be a visionary to call the outcome: the cost of such an academic head game may be the entire budget allocated, thus repeating the common error of confusing the spending of public funds with the delivery of advanced information technology.
Also, I have to agree with John Sowa that what is going on in the SW research projects gives many of us a sort of AI déjà vu. The malady of misconceiving the respective values of Form (logic) and Content (ontology), which has long afflicted the Knowledge Representation area, has surfaced again, despite having been a main cause of the classical AI paradigm's demise. As a fresh lesson of this harmful condition, it is useful to recall Vulcan's Project Halo, which failed to meet the loudly declared hopes and promises of creating a Digital Aristotle and is now silently passing away.
All this takes place despite the fact that the lessons ('the best practices') acquired from AI R&D are mostly evident and instructive. Even such an influential AI policy maker as Randall Davis, a former president of AAAI, emphasized that "a KR is a set of ontological commitments" on <how to view and reason about the world>. The logically-minded researchers were forewarned of the trouble that formal logic languages can bring:
1. "Ontologies can of course be written down in a wide variety of languages and notations (e.g., logic, LISP, etc.); the essential information is not the form of that language but the content, i.e., the set of concepts offered as a way of thinking about the world. Simply put, the important part is notions like connections and components, not whether we choose to write them as predicates or LISP constructs."
2. "... all the representation technologies ... supply only a first order guess about how to see the world: they offer a way of seeing but don't indicate how to instantiate that view. As frames suggest prototypes and taxonomies but do not tell us which things to select as prototypes, rules suggest thinking in terms of plausible inferences, but don't tell us which plausible inferences to attend to. Similarly, logic tells us to view the world in terms of individuals and relations, but does not specify which individuals and relations to use." [see "What Is a Knowledge Representation?", AI Magazine, 1993]
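Davis's first point can be made concrete with a small sketch of my own (not from his paper; the 'connected' predicate and the component names are invented for illustration): the same assertion written as a logic-style atom and as a LISP-style expression carries identical content.

```python
# Illustrative sketch only: the same ontological content -- a pump
# connected to a valve -- written in two different notations.
# The domain, names, and predicates here are invented for illustration.

# Notation 1: a logic-style ground atom, as it might appear in FOL.
logic_form = "connected(pump1, valve2)"

# Notation 2: the same assertion as a LISP-style s-expression,
# modeled in Python as a nested tuple.
lisp_form = ("connected", "pump1", "valve2")

def parse_logic_atom(atom: str) -> tuple:
    """Parse 'pred(arg1, arg2)' into ('pred', 'arg1', 'arg2')."""
    pred, _, rest = atom.partition("(")
    args = [a.strip() for a in rest.rstrip(")").split(",")]
    return (pred, *args)

# Both notations reduce to the same content: the ontological commitment
# that 'connected' is a relation holding between two components.
assert parse_logic_atom(logic_form) == lisp_form
print("Same content, different forms:", lisp_form)
```

The choice between the two forms is purely notational; the ontological commitment, that components stand in a 'connected' relation, lives in the content, not in the syntax.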
Bottom line:
We need to comprehend the meaning and relationships of real ontology and formal logic, their similarities and differences. For both universal sciences embrace all things, but from diverse perspectives. Ontology considers the being of everything that exists in the world, material, mental, or social: all the basic aspects, properties, relationship patterns, and uniformities of reality, cutting the body of all things along its joints. By contrast, (formal) logic deals with the formal parts and elements of human knowledge and reasoning, cutting the forms of the universe of discourse away from its matter and content. As a consequence, the ontology/logic distinction makes all the difference in building a new class of Knowledge Society intellectual technologies, like the semantic web.
Though it relates to anything, Logic (as a formal science) is only about the formal conceptual elements and patterns (terms, predicates, propositions, and inferential rules) of discourse about anything, all taken without any reference to reality. Ontology, by contrast, is about assigning a real significance (meaning) to formal logical constructions, linguistic expressions, and communicative acts alike, within a single framework of fundamental entity classes and relationships applicable to all knowledge domains and sciences. Failing to see this cardinal 'division of knowledge power' between real ontology and formal logic, with their inherent interplay, may be harmful to the whole cause of advanced knowledge systems and reasoning applications.
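As a toy illustration of this division of labour (again my own, invented for this note): a logical inference rule operates purely on the form of symbols, while an ontology is what ties those symbols to classes of real entities.

```python
# Toy illustration (invented for this note): logic manipulates bare
# forms; ontology interprets them.

# A purely formal rule -- modus ponens -- that works on uninterpreted
# symbols. Logic neither knows nor cares what the symbols stand for.
def modus_ponens(p, implication):
    """From p and ('implies', p, q), conclude q."""
    op, antecedent, consequent = implication
    if op == "implies" and antecedent == p:
        return consequent
    return None

# An ontology (here just a dict) assigns the symbols a real
# significance: which kind of entity or state of affairs each denotes.
ontology = {
    "socrates": "a Person (concrete individual)",
    "mortal(socrates)": "a state of affairs about a Person",
}

conclusion = modus_ponens("human(socrates)",
                          ("implies", "human(socrates)", "mortal(socrates)"))
print(conclusion)                # mortal(socrates) -- obtained on form alone
print(ontology.get(conclusion))  # the meaning is supplied by the ontology
```

The inference goes through on form alone; only the ontology layer says what, if anything, the conclusion is about.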
----- Original Message -----
Sent: Sunday, March 26, 2006 11:55 PM
Subject: Re: Interpretation of RDF reification
I find this very interesting, but also a bit worrying.

i. I find it incredibly interesting because, many years ago (twenty, perhaps), I shared my flat with someone who was studying Grice. I was interested enough to make sure I had a copy of every paper he had written; some are a bit obscure. My friend pointed out at the time that he was an off-the-beaten-track figure, and so it seems he has remained, at least until recently. I don't mean to imply this is where I think his work should be, far from it. I know how ambiguous email communication can be, but I have always been intrigued by his work. I thought there wasn't a formalism capable of capturing his ideas sufficiently for machine exchanges.

ii. I have read, and I think treasured, Knowledge Representation. And, indeed, there is a bibliographical reference to Grice in it.

iii. But not finding Grice mentioned in the SemWeb efforts, I had assumed it was either irrelevant or subsumed in this effort. If not impossible to incorporate, then it was more of the former.

iv. I understand that things can go in circles in the AI world, as just implicitly mentioned in John's post, but people in the university department I used to work in (Greenwich University) said much the same. I suppose that basically a good idea may not have been fully fleshed out, and different implementations have implications for its viability.

v. This is the worrying bit. John has said that RDF and OWL are too limited, clumsy, and inefficient to support any serious work in knowledge representation and reasoning.
I don't know if this is true. I don't know what constitutes "serious work", although John hints that large models make Protege choke, and I assume it is the reasoning chains in the model, rather than the number of elements per se, that are the choke points. I also don't know what constitutes "serious work" because use cases are so thin on the ground.

Now, to expand on my concerns: where there is a well-articulated use case (in the sense of how to use it, not whether it will be used; more on that in a bit), that of workflow modelling, there is an example in a well-funded EU project working in this area, WSMO, but the project uses another specialised language that OWL can be translated into, not OWL itself. This may be important in that I had thought the reason for RDF and OWL was not so much to achieve what couldn't be achieved by other means as to achieve it in an open language, where what would be key is the extent of adoption of that language.

Just before I go further, perhaps there are some flawed assumptions here. Once again, I had thought that the analogy might be with the rise of Java. I thought it was seen that although Smalltalk is like Java, the desirable outcome would be to follow a trajectory similar to Java's. One might say that Smalltalk is "better" than Java but failed to gain acceptance due to its marketing. Although obviously not open source, Java was made completely (or sufficiently) available to gain wide adoption. So, to gain acceptance, open-source transparency and availability are a good thing. But this does presuppose that the language in question will otherwise cut the mustard. (I know there are other precedents, in particular and notably the Apache Web Server, but you get the idea.) So, perhaps the flawed assumption is that there should be just one language or language set that does for the SemWeb?

vi. However, judging from the level of activity on the mailing lists for WSMO-related issues, this, at least, has a long way to go before any sort of wide acceptance. I am not sure that this is because of the limitations of OWL (or of WSMO's own variant), nor, even, because of the obvious fact that a further language fragments the potential user group; this may not apply here anyway. What I think is happening is that the technology has not shown itself to be sufficiently compelling as yet.

vii. So how does a technology prove itself in this way? There is a tension between what can be demonstrated and what potential users are prepared to contemplate by way of adoption. This is complicated by a number of things. What are we trying to do here? It can't be just to promote a single-language solution, but rather the ability to do various sorts of reasoning across disparate data pools, to determine the preferred design for those pools, to cope with non-conformant pools, and to offer an open means of achieving these ends (this list isn't intended to be comprehensive). But the thought remains that there should be a single language for this, since this reduces duplication of effort. Again, a compelling application might be persuasive, but then that application would have to be used to be compelling, that is, have a real user base. And there we are looking at another area of complication. For instance, the database that Dan wishes for has been implemented in several forms for RDF. These are already compelling applications, although not enough to make a semantic application, as the schema, data and queries are also required.
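(As a minimal sketch of what "schema, data and queries" means in practice here, assuming the Python rdflib library; the vocabulary and triples below are invented, and this is an editorial illustration, not part of the original message:)

```python
# Minimal sketch (hypothetical example; assumes the Python rdflib
# library): an RDF store only becomes a semantic application once
# schema, data, and queries are all present.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/vocab#")

g = Graph()
# Schema: declare a class and a property.
g.add((EX.Document, RDF.type, RDFS.Class))
g.add((EX.title, RDF.type, RDF.Property))
# Data: one instance conforming to that schema.
g.add((EX.doc1, RDF.type, EX.Document))
g.add((EX.doc1, EX.title, Literal("Interpretation of RDF reification")))

# Query: SPARQL over the same graph.
results = g.query("""
    SELECT ?title WHERE {
        ?d a <http://example.org/vocab#Document> ;
           <http://example.org/vocab#title> ?title .
    }
""")
for row in results:
    print(row.title)
```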
So, in sum, have I narrowed it down? Is the issue that, were a more expressive language used, there would be more people in the user community at work on schema, data and queries utilising that language? Is there a particular application that would show the difference between the two languages and prove a compelling case for would-be adopters? Is there sufficient regard paid to the distinction between different types of possible SemWeb applications? I have mentioned one, and several others are mentioned on this list as well as in this thread. In particular, there seems to be a sharp distinction between, say, a desktop application that relies on markup to decorate underlying content and P2P to discover info nuggets, and building a large and comprehensive ontology in the field of medical discovery. Or again, as I mentioned, an ontology of process that comprehensively handles workflow. Why should it be the same language in all cases? Is it just to do with a paucity of alternative tools and the desire not to duplicate effort? But if the tools are for the "wrong" language, this simply isn't good enough, is it? This in turn will have implications for funding efforts and hopes of success for different projects, so I think the issues should be considered very seriously.

Sincerely,
Adam Saltiel