John,
The history of accommodating legacy systems is pretty much as you describe it.
I think that it has been politically easier to talk about building semantically rich models
for the new systems and to finesse the legacy issues. For example,
one of the current “weak semantics” approaches to addressing this problem
is enterprise architectures (EA). At a high level, a typical EA method might:
1. study the current implementations focusing on key interfaces,
2. create an AS-IS model of applications and interfaces,
3. build “enterprise-level” information standards and
4. target new systems with these standards, working out the
interfaces between old and new systems in the process
(transition planning).
There are many issues with this “enterprise” approach. One is that we don’t take on
the real problem of understanding the differing semantics of the various
applications/systems that make up the legacy systems. We typically do have a
number of interfaces that work across many systems, but these may not be
well modeled (as I believe you pointed out in the early 90s in an article with Zachman).
If we did create pretty good semantic models of what we currently use, we might find
ways of unifying portions of this, and that in turn might provide a basis for the new,
transitional systems that have to interoperate with other new systems and with important
portions of the legacy systems. It could also work the other way, which is what most
EA efforts seem to expect, but we can learn enterprise semantics from what is
already working in the enterprise as well as from a new start on enterprise system semantics
through EA planning. So one idea is that we work intelligently (for effectiveness and
efficiency) on ways to improve the semantics of current EA “information models” – both
legacy and new. Current enterprise architectures are quite weak in this regard.
Gary Berg-Cross
Dear Matthew and Gary,
I want to emphasize the point that I am not against the
idea of having shared ontologies, shared vocabularies,
and shared upper levels with detailed axioms.
But the most important point to recognize is that we
are not operating in a vacuum. Even when people haven't
been using the O-word, they have been making ontological
commitments in every program and database system they
have developed over the past 50 years.
We cannot assume that any new ontology is magically going
to replace those time-honored commitments. Instead, it
will be just one more incompatible ontology that will
threaten to create more problems than solutions.
So the #1 problem to be addressed is how any new ontology
can interoperate with *all* the existing ontologies,
implicit and explicit. The implicit ones are the most
challenging because their assumptions aren't documented
in any conveniently accessible way.
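To make the point concrete, here is a minimal sketch of two implicit ontologies at work. All system names, record layouts, and field names are hypothetical illustrations, not drawn from any real system: each legacy record format silently commits to a view of what a "customer" is, and every interface between them must rediscover those undocumented assumptions.

```python
# Two legacy systems that never mention the O-word, yet each embeds an
# implicit ontology in its record layout.

# "System A" (hypothetical 1970s billing): a customer is an account
# number with one free-text address -- commitment: one customer, one address.
record_a = {"acct_no": "00417", "name": "J. Smith", "addr": "12 Elm St, Croton NY"}

# "System B" (hypothetical 1990s CRM): a customer is a party that may
# play roles and hold many typed addresses -- a different, incompatible
# commitment about the same real-world entity.
record_b = {
    "party_id": 9311,
    "roles": ["customer"],
    "addresses": [{"type": "billing", "street": "9 Oak Ave",
                   "city": "Ossining", "state": "NY"}],
}

def a_to_b(rec):
    """One hand-built mapping between the two implicit ontologies.
    Each such interface must guess at assumptions nobody wrote down
    (is 'addr' a billing or a shipping address?)."""
    street, _, rest = rec["addr"].partition(",")
    city, _, state = rest.strip().rpartition(" ")
    return {
        "party_id": int(rec["acct_no"]),
        "roles": ["customer"],
        "addresses": [{"type": "billing",  # guessed: undocumented in System A
                       "street": street.strip(),
                       "city": city.strip(),
                       "state": state.strip()}],
    }

print(a_to_b(record_a))
```

A new "official" ontology does not erase either record layout; it becomes a third party that every such mapping must now also accommodate.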
> MW: The question is one of efficiency and effectiveness.
> The use of a shared ontology for an organization reduces
> the cost of developing interfaces, and means they tend
> to be less error prone. Of course you can manage without,
> but why spend more than you need to?
I'm all in favor of sharing anything that can be shared.
But I just want to emphasize that the new things have to
share with old things that were in place and operating
long before any new kid on the block was even born.
> MW: ... I tell the story about early maps which would
> look hopeless now, yet made a great improvement to the
> navigation of that age. It doesn't mean that better
> maps aren't more useful though.
That's an excellent analogy. And I'd like to point out
that there was never a stage when all the old maps were
thrown out and replaced overnight. More often than not,
new ships had the new technology, while old ships sailed
the same seas and went to the same ports guided by their
original technology. (One of the telephones in my house
still has an old rotary dial, and it works just fine.)
GBC> The present DRM approach does not provide guidance on
> how to generate common classifications and so a simple
> hypothesis is that an agreed upon ontology will solve this
> problem.
Just look at history:
1. The new stuff will have to interoperate with the old stuff
for a very long time. Many companies and gov't agencies
have programs in daily use that are up to 40 years old.
2. The old stuff that has been running for years is much
harder to change than any new stuff that is still in the
design stage.
3. Furthermore, the only thing we can safely predict about
the future is that it will be unpredictable.
4. Therefore, the burden for accommodating change is on any
new system that may be proposed. It must have means for
accommodating anything in any past system that is still
in use and any future system that may someday be invented.
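Point 4 can be sketched as an adapter layer: the new system defines the interface it wants, and new code (never the 40-year-old program) absorbs the cost of translation. Everything here is a hypothetical illustration, including the YYMMDD legacy format and the century pivot rule.

```python
from abc import ABC, abstractmethod

class DateSource(ABC):
    """Interface the new system defines for its own convenience."""
    @abstractmethod
    def iso_date(self) -> str: ...

class Legacy2DigitYear(DateSource):
    """Adapter wrapping a decades-old record format (YYMMDD) that
    cannot be changed; the new code carries the burden of conversion."""
    def __init__(self, yymmdd: str):
        self.raw = yymmdd

    def iso_date(self) -> str:
        yy = int(self.raw[:2])
        # Century pivot is an assumption the old format forces us to make.
        century = "19" if yy >= 70 else "20"
        return f"{century}{self.raw[:2]}-{self.raw[2:4]}-{self.raw[4:6]}"

print(Legacy2DigitYear("850614").iso_date())  # 1985-06-14
```

When a future system arrives with yet another convention, it gets its own adapter; the old program keeps running untouched.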
John Sowa